CHAPTER TWELVE

WELFARE SERVICES AND POLICIES

Health Care, Education, and Social Services

Health care originated in the late Roman Empire as a service to local populations living near military compounds. After the fall of the Roman Empire, this service was taken over by the Roman church. In Catholic countries this remained the case until the twentieth century, while in Protestant countries health care was transferred to the local authorities from the sixteenth century on. Urbanization and industrialization brought about living conditions that encouraged epidemics (malaria, typhoid fever, diphtheria, and so forth). From the middle of the nineteenth century, doctors and nurses advocated government policies for public health care. What models of health care exist today in the world?

Alongside basic health care, education is a major instrument for ensuring that each citizen has the ability to practice full citizenship in the public realm and to pursue a worthwhile career. This too was, at least in the Western world, a church affair (until the sixteenth century in Protestant countries and until the nineteenth century in Catholic countries). Currently, there are interesting differences between countries. For instance, in the United States, a nation with an allegedly weak state tradition, 90 percent of education is provided through the public school system, while in a strong state such as the Netherlands, about half the schools are private (and mostly denominational). What variation is there in education policy and systems across the globe?

In this chapter we will also focus on policies for the physically disabled and the mentally ill, the indigent, the sick, the elderly, the unemployed, and so on. Modern social services started around the mid-eighteenth century when, under the auspices of Poor Laws, leisured women, denied self-expression in education, employment, and politics, manifested their talents in charitable work. Freed from household responsibilities by their maids and servants, they were keen fund-raisers and visited the poor in their homes, in workhouses and hospitals, and in asylums and prisons. This maternalist welfare state, as Skocpol called it, was replaced everywhere by a paternalist welfare state that focused its support on the breadwinner. What variation is there across the globe in social service policies?

Health Care

The modern state in its fullest expression provides a range of services for which there is no historical precedent, and perhaps health care in the broadest sense of the word represents that modern state best. To what extent do and ought states feel responsible for providing health care? Hubert H. Humphrey, speaking on November 4, 1977, at the dedication of a building named after him in Washington, D.C., put it thus:

The moral test of government is how it treats those who are in the dawn of life, the children; those who are in the twilight of life, the aged; and those who are in the shadows of life, the sick, the needy and the handicapped.

We could not agree more, and yet there is great variation across the globe, and there is even variation among Western countries. The health care system of the United States, especially, stands in sharp contrast to those of other countries: “In a society that has long emphasized self-reliance, it is not surprising that the United States leaves its citizens more exposed than their counterparts abroad to the principal hazards of life, such as sickness, poverty in old age, violence, unemployment, and injury on the job.” (Bok, 1996, p. 374) The American distrust of government comes at a price: “The real victims are the millions of poor or shelterless or medically indigent who have been told, over the years, that they must lack care or life support in the name of their very own freedom. Better for them to starve than be enslaved by ‘big government.’ That is the real cost of our anti-government values.” (Wills, 1999, p. 21) In this section we will look at the health care systems of Singapore, the Netherlands, Argentina, and Australia.

Singapore: The “3Ms” Triad—Medisave, Medishield, and Medifund

Being a tiny island with no natural resources, Singapore has faced a most uncertain future from its inception in 1819 as a British colonial outpost through independence in 1965. Such circumstances have forged a strong government-people partnership, which innately disposes people to place the common good above self-interest. Singapore did inherit a largely tax-based and publicly provided health care system (information in this section from Lim, 2004a, 2004b, 2005; Barr, 2001; Pauly, 2001).

When Singapore moved into first-world status, its government replaced government regulation with archetypal middle-class mechanisms of financial constraint and self-regulation that were supposed to be more sustainable in the long run. Although hospital care was free and government clinics were directly subsidized under the previous system, there was no immediate funding problem; quite the opposite: health costs fell dramatically as a proportion of gross domestic product (GDP) between 1960 and the introduction of Medisave in 1984. Medisave, the compulsory savings plan, was not an attempt to ameliorate the effects of a laissez-faire health system, but rather a bold venture to inject market forces into government-funded health care.

The inception of the “3Ms” was premised upon self-responsibility alongside the economic principle that health care services should not be supplied freely on demand without reference to price. These 3Ms marked an explicit break by Singapore from a British-style national health service, which was perceived as neither viable nor sustainable. Singapore's “no free lunch” philosophy has underpinned its rapid economic growth, so its government has eschewed egalitarian welfarism in favor of market mechanisms to allocate finite resources.

The Central Provident Fund (CPF) is a national superannuation scheme: a compulsory, tax-exempt, and interest-yielding savings account started in 1955 to provide financial protection for workers in their old age. Over the years, the plan has been modified to allow for preretirement withdrawals to purchase homes, buy home mortgage insurance, and even invest in “blue chip” stocks and pay for children's college expenses. Singaporeans contribute approximately 36 percent of their gross salaries to the CPF, half of which comes from their employers. Medisave was created in 1984 as an extension of the larger CPF. Medisave represents 6 to 8 percent of wages (depending on age) sequestered from the individual's CPF account to cover medical expenses such as hospitalization and acute care in later life. It can be used for convalescent hospitals, hospices, and certain outpatient treatments such as day surgery and radiotherapy. There is an element of risk pooling among family members, as it can be used to pay the bills of one's spouse, children, siblings, or parents. Any unspent balance in Medisave is passed on to the account holder's beneficiaries upon his or her death.
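
To make the contribution arithmetic concrete, the following sketch (in Python) computes the flows for a hypothetical worker. The 36 percent combined CPF rate split evenly with the employer and the 6 to 8 percent Medisave share are the figures cited above; the age bands used to pick a rate within that range are illustrative assumptions, not the official CPF schedule.

    def cpf_contributions(gross_monthly_salary, age):
        """Illustrative split of CPF and Medisave contributions.

        Rates follow the text above: roughly 36% of gross salary goes to
        the CPF, half paid by the employer, and 6-8% of wages (rising
        with age) is sequestered into Medisave. The age bands are
        assumptions for illustration only, not official CPF schedules.
        """
        total_cpf = 0.36 * gross_monthly_salary
        employee_share = employer_share = total_cpf / 2

        # Assumed age bands mapping onto the 6-8% Medisave range in the text.
        if age < 35:
            medisave_rate = 0.06
        elif age < 50:
            medisave_rate = 0.07
        else:
            medisave_rate = 0.08
        medisave = medisave_rate * gross_monthly_salary

        return {
            "employee_share": employee_share,
            "employer_share": employer_share,
            "medisave": medisave,
            "other_cpf_accounts": total_cpf - medisave,
        }


    if __name__ == "__main__":
        # A hypothetical worker earning S$4,000 a month at age 40.
        print(cpf_contributions(4000, 40))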

Medishield was introduced in 1990 as low-cost top-up catastrophic illness insurance (with premiums payable from Medisave) to supplement Medisave. Medifund was launched in 1993 as a means-tested public safety net of last resort for the needy. Since the 3Ms formula was not designed to take into account the long-term care needs of the elderly, new plans such as Eldercare and Elderfund have been added to finance such care. The government has periodically topped up (from budget surpluses) the various plans to benefit the less well off and the elderly. The system has thus tried to reconcile Singaporeans' ingrained aversion to welfare with the reality that, for both economic and political reasons, the government must ensure the provision of health services to the whole population, including low-income earners and the poor.

Health care provision comprises a mix of eight public hospitals and five specialty centers, which together account for 80 percent of inpatient beds, with 13 private hospitals accounting for the remaining 20 percent. Primary health care is accessible through an extensive and convenient network of private general practitioners (80 percent) and public outpatient polyclinics (20 percent). An estimated 12 percent of daily outpatient attendances are by traditional Chinese practitioners in the private sector. Since 1985, every public sector hospital has been restructured and granted autonomy in operational matters in order to infuse private sector efficiency and financial discipline, but the government has retained 100 percent ownership of hospitals. Endeavoring to inject market forces into the system, the restructured hospitals, initially managed by a monolithic government company, underwent further reorganization in 2000, bifurcating into two competing clusters, the National Healthcare Group and Singapore Health Services, both ultimately reporting to the Ministry of Health (MOH). Until 1999, the MOH upheld a structural approach with strong regulation, organization, and management, assuming that good doctors and facilities would be conducive to good processes and outcomes. Over the past decade, Singapore has switched to a broader, multidimensional concept of quality while monitoring clinical indicators and medical errors.

The first National Quality Control Circle Convention was held in 1982, and ever since, health care managers have embraced total quality management (TQM) and quality improvement principles. No hospital or specialist medical center in either the public or private sector has been without a quality committee, usually chaired by the hospital's chief executive officer. Such a committee has overseen all quality-related initiatives organization-wide and kept track of key performance indicators. At first, health care managers lacked the knowledge and tools for objective measurement and evaluation of clinical quality, but that has started to change. Hospitals were monitoring a handful of clinical indicators such as unplanned readmissions and nosocomial infections, but absent standardization, no valid comparisons could be made. Owing to growing awareness of progress in the science of performance measurement, in tandem with trends in Western industrialized countries, health care providers have adopted a systematic and scientific approach to clinical quality.

Since 1990, all health-related organizations such as hospitals, polyclinics, insurance companies, pharmacies, and the CPF board have been linked by MediNet, an electronic data interchange system comprising six components: central claims processing, national patient master index, procurement, information, service, and notification. This extensive use of information technology is invaluable for compiling accurate and timely clinical data. In 2000, the MOH mandated all acute care public and private hospitals to take part in the Maryland Quality Indicator Project (QIP), which has involved monitoring a set of clinical quality indicators and benchmarking them against national and international norms. Monitoring inpatient mortality, perioperative mortality, device-associated infection in the ICU, and so forth has provided hospitals with comparative feedback in the form of quarterly reports and data analyses that spurred them into action to improve quality. Hospitals in both the public and private sectors have engaged full-time quality managers to measure clinical processes and outcomes instead of simply leaving it to individual doctors to decide what works best for their patients.

In 2001, five former national agencies were amalgamated into a new statutory board, the Health Sciences Authority (HSA), to regulate health products (blood, drugs, supplements). In 2002, the MOH set up a Health Regulation Division out of the former Medical Audit and Accreditation Unit, with broad responsibilities for licensing and accreditation, legislative enforcement, surveillance, clinical audit, and quality assurance, including implementing clinical pathways and best practices in disease management.

The question of whether the 3Ms system had managed to contain costs was a bone of contention. Barr (2001) contended that Medisave, the institutional linchpin of the “personal responsibility” attitude, cultivated frugality because everyone who could possibly afford to pay anything was paying something for his or her own health care. Pauly (2001) pointed to the low share of GDP spent on health care compared with other industrialized countries. However, exogenous factors such as the nation's unique ethnic and cultural mix, the healthy diet of the largely Chinese population, and the absence of a significant underclass may account for that alleged thrift. Looking at crude figures, Singapore seemed to have it both ways: Patient satisfaction was reportedly high (85 percent), the average waiting time for elective surgery was two weeks, standards of health were among the highest in the first world, and all of this has been delivered at an affordable 3 to 4 percent of GDP for the last three decades. This responsive, economical, and high-quality health system means that Singaporeans appear to be getting good value for their money.

The Netherlands: Dutch-Managed Competition—Getting the Full Monty

In countries such as the Netherlands, Belgium, and Germany, some of the large-scale health insurance funds owe their existence to local initiatives taken in the nineteenth century, when groups of people with a social conscience (e.g. community leaders, employers, physicians, nurses) worked together to provide more secure financial circumstances and medical care for those in need (information in this section from Companje and others, 2009; Helderman and others, 2005; Götze, 2010; Van de Ven and Schut, 2008; Okma and others, 2010; Enthoven and Van de Ven, 2007; Tuijn and others, 2011; Schoen and others, 2009). In the 1980s, demographic aging had already loomed large, but it was not until the 1990s that the health “dossier” gained a European dimension, when European policy makers paid heed to “social Europe” in addition to “economic Europe.” The Treaty of Rome of 1957 endorsed free-market mechanisms and rejected government monopolies not solely for services such as telecommunications, post, and transport, but also for health insurance.

In the wake of European integration, the Netherlands adopted a stringent Competition Act. A new Dutch Competition Authority (NMa) made it clear that it would safeguard any room for competition in health care created by the government, bearing in mind that the search for specialist treatment or a bed in an old-age home no longer stops at a country's borders.

In 1941, the German authorities occupying the country coerced their new subjects into a mandatory health insurance plan for low- and middle-income groups. By the end of the 1960s, the government feared that rising health care costs would jeopardize universal access and inflate labor costs to the extent of raising unemployment, to the detriment of the Dutch export-based economy. In light of such concerns, regulation of both supply and prices was tightened from the mid-1970s onward. Following the economic shock of the oil crisis in 1973, alongside fears of insufficient governmental planning to accommodate an aging population, welfare policies changed in the early 1980s. The Health Care Prices Act (1982) circumscribed physicians' fees and later also their total revenues; medical specialists were forced to forgo fee-for-service (FFS) payments in favor of a “lump-sum payment” per hospital (or institution) for all specialists working in that hospital. Open-ended hospital reimbursement gave way to a budgeting system that was subsequently expanded to all other inpatient care institutions.

In 1986, the center-right government assigned the Dekker Committee to draft a proposal for reform. Issued in 1987, the recommendations of the Dekker Committee ripened into the Health Insurance Act (HIA) of 2006. During those two decades, incremental changes paved the way for regulated competition. Reflecting the rise of New Public Management ideas in the 1980s and hinging upon the ideas of the American economist Alain Enthoven about “managed competition,” the Dekker plan portrayed a market of mandatory insurance with competing health insurers and open enrollment. Seventy-five percent of the plan was meant to be financed by income-related contributions paid to a central fund, which would redistribute them back to insurers based on their risk structures in order to level the playing field and remove incentives for risk selection and cream skimming.

The first health reform bill passed in 1989, but the process came to a halt. The 1994 “purple coalition” shelved the reforms and opted instead for piecemeal improvement. The Dekker plan rose from the ashes when the 2003 governing coalition of Liberal Conservatives and Christian Democrats decided to take up its basic ingredients and place even greater emphasis on market competition. On January 1, 2006, the Dutch government enacted the HIA. The new law obligated each person legally living or working in the Netherlands to buy individual private health insurance from a private insurance company. This basic health insurance has supplanted the former mix of (social) public and private health insurance.

Each basic health insurance plan has covered a legally prescribed package of benefits and entitlements, including preventive services, inpatient and ambulatory medical care, prescription drugs, and medical aids. Insurers have been legally bound to accept each applicant for a basic insurance contract at a community-rated premium, without excluding coverage due to preexisting conditions.

Funding has consisted of a mix of direct contributions, earmarked taxes, and government subsidy. Employees have had to pay an income-related contribution, for which employers have been compelled to compensate their employees regardless of their chosen insurer; that compensation has constituted part of employees' taxable income. Other adults have had to pay a community-rated premium directly to their chosen insurer. The tax collector, in turn, has transferred contributions to a Risk Equalization Fund (REF). For high-risk insured people, insurers have received a risk-adjusted equalization payment from the REF; for low-risk insured people, insurers have had to pay an equalization payment to the REF. Hence, insurers' incentives for risk selection have been substantially reduced, though not entirely removed. Insurers have been at liberty to give premium rebates to groups (up to 10 percent for mandatory basic insurance, but higher for supplementary health insurance or other insurance products) organized by employers, sports associations, and so forth, and an organizer of a group could selectively enroll preferred members only. The government has paid all costs incurred by children, and households have received a care allowance whenever the average community-rated premium exceeded a certain proportion of their income; about two-thirds of all households have been receiving that allowance since the HIA's enactment.

People could supplement their insurance with benefits not included in the mandatory basic package, such as dental care for adults, physiotherapy, eyeglasses, alternative medicine, and cosmetic surgery. For such extra insurance, insurers could select risks, refuse applicants, or levy risk-adjusted premiums. Around 93 percent of insured people have opted for supplemental insurance (usually from the same insurer providing their basic coverage), so insurers have had opportunities to select.
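
A stylized sketch may help to visualize the money flow just described: income-related contributions are pooled (via the tax collector) in the Risk Equalization Fund, the fund pays or charges insurers according to the risk of each insured person, and every adult also pays a community-rated premium directly to the insurer of his or her choice. The rates, premium, and expected-cost figures below are invented for illustration; the actual Dutch risk-adjustment formula is far more elaborate.

    # Stylized model of the Dutch basic-insurance money flow described above.
    # The contribution rate, premium, and expected costs are invented for
    # illustration; the real Risk Equalization Fund uses a far richer
    # risk-adjustment model (age, sex, pharmacy cost groups, and so on).

    INCOME_CONTRIBUTION_RATE = 0.07   # assumed income-related contribution
    COMMUNITY_RATED_PREMIUM = 1300    # assumed flat annual premium per adult

    def ref_payment(expected_cost, average_cost):
        """Equalization payment from (or to) the REF for one insured person.

        Positive: the insurer receives money for a higher-than-average risk.
        Negative: the insurer pays into the fund for a lower-than-average risk.
        """
        return expected_cost - average_cost

    insured_pool = [
        {"income": 40000, "expected_cost": 900},    # young, healthy
        {"income": 55000, "expected_cost": 2600},   # chronically ill
        {"income": 25000, "expected_cost": 1800},   # average risk
    ]

    average_cost = sum(p["expected_cost"] for p in insured_pool) / len(insured_pool)

    for person in insured_pool:
        contribution = INCOME_CONTRIBUTION_RATE * person["income"]
        equalization = ref_payment(person["expected_cost"], average_cost)
        insurer_revenue = COMMUNITY_RATED_PREMIUM + equalization
        print(f"contribution to REF: {contribution:7.0f}  "
              f"equalization: {equalization:+7.0f}  "
              f"insurer revenue: {insurer_revenue:7.0f}")

The point of the sketch is that, after equalization, an insurer's revenue per insured person tracks expected cost rather than the person's own premium, which is what blunts the incentive to select good risks.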

The Dutch Health Care Inspectorate (IGZ), an independent agency within the Ministry of Health, Welfare, and Sport, has been monitoring quality of care by enforcing 25 laws such as the Care Institutions Quality Act. That prime regulator has been carrying out regular inspections, responding to complaints with focused investigations, and monitoring performance on a regular basis. Apart from addressing specific issues (often at the request of the minister or the parliament) and responding to calamities that unearth structural shortcomings in care provision, since 2002 IGZ has been developing risk-based indicators to assess the quality of health care. Indicators have been devised in cooperation with representatives of the health care sector. Every institution deemed unsatisfactory by IGZ inspectors has been compelled to improve and subjected to follow-up visits; if deficiencies have not been rectified, IGZ could impose sanctions. When compared with the health systems of other developed countries, the rather new Dutch system seems to hold its own, offering responsive, high-quality health care.

Argentina: Promoting a Health Care System under a Sick Economy

Dating back to the 1940s and 1950s, Argentina's fragmented health system has been the upshot of protracted struggles between the Ministry of Health (MOH) and the Ministry of Labor and Welfare (information in this section from Lloyd-Sherlock, 2005, 2006; Barrientos and Lloyd-Sherlock, 2000; Belmartino, 2000; Cavagnero, 2008; Cavagnero and Bilger, 2010; Rubinstein and others, 2007). The former erected and presided over an impressive national network of publicly funded hospitals and other health facilities, while the latter aspired to buttress labor unions via Social Health Insurance (SHI). Over the next decade, it became clear that the SHI sector had won, and direct state funding to the MOH was scaled back and became more erratic. SHI required affiliates to make compulsory contributions to a fund, which in return provided services or contracted them out to third-party providers. Contributions have been set at a certain percentage of gross salaries, deducted from workers' monthly payrolls, and sometimes matched by contributions from employers and the state. Those who had not contributed, or were not dependents of contributors (usually immediate family members), were not entitled to services provided by SHI funds.

Argentina's SHI evolved out of mutual aid societies that hark back to the start of the twentieth century. The sector had come under the supervision (but not control) of the Ministry of Labor and Welfare in 1946, and affiliation became compulsory for workers in 1970. By the 1990s, SHI was largely in the hands of 300 or so different obras sociales, most of which were run by trade unions and theoretically regulated by, but autonomous of, the state. Each fund had monopolistic rights over a demarcated sector of the labor force, and workers could not choose between funds. Most funds were too small to provide services, so they contracted out to private clinics and hospitals, giving rise to a large private sector. Absent state regulation, that purchaser-provider split rendered the system of contracting and subcontracting chaotic and unaccountable and was by no means conducive to efficiency and competition. In terms of funding then, Argentina's health system has consisted of three pillars: the publicly financed sector, social insurance funds (known as obras sociales), and private plans.

Until recently, obras sociales were exempt from catering to members once they retired. El Programa de Atención Médica Integral (PAMI), a separate health insurance fund for pensioners, was designed to fill that gap. As with obras sociales, PAMI mainly contracted out to private providers, although there was also “leakage” of affiliates to the public sector. PAMI has been financed through a separate wage tax and has provided a range of services to all insured people aged 60 or over and, in theory, to the noninsured aged 70 or over.

SHI funds administered by the obras sociales have run at the national (obras sociales nacionales, OSNs) and provincial (obras sociales provinciales, OSPs) levels. Many of the OSN funds have been managed by unions associated with particular industries. The majority of them did not have their own facilities, so they paid for care provided in public and, mostly, private institutions. Unlike most countries in the region, Argentina has never merged its social health insurance into a unified, national institute of social security. Instead, Argentina has upheld a “corporatist atomized private model” that segmented different occupational groups into exclusive, quasi-noncompetitive sickness funds.

Private health insurance first emerged in Argentina in the late 1960s. The industry has comprised for-profit (Prepagas) and not-for-profit (Mutuales) sectors that have provided voluntary plans mainly for high-income groups, supplementing what they have been obliged to take with obras sociales. With no regulatory framework, insurers have varied tremendously in terms of their size and the degree to which they relied on third-party providers; Prepagas have been largely criticized for high operating costs and lack of transparency.

Following a grave economic crisis in the 1980s, Argentina faced profound changes to its economic structure and political system, since the crisis brought forth a marked drop in production, accelerated inflation, and high unemployment. The health sector was reformed in the 1990s, putting in motion managed care and market-oriented policies, and setting the stage for decentralization, self-management of the tax-funded health sector, and restructuring the OSNs.

By the 1990s, Argentina's publicly funded health sector had suffered from decades of underfunding, to the detriment of quality. Besides holding back the publicly funded sector, the obras sociales were able to siphon substantial resources from it. Both obras sociales and private insurers were permitted to send their affiliates to public hospitals, but in principle had to pay for these services. Large numbers of affiliates used the public sector particularly for more expensive and complex treatments, as low-wage earners could not afford the copayments charged by their insurers. However, hospitals were rarely reimbursed. This lack of indemnity (a large free-rider effect) further reduced the resources available for uninsured Argentines. Furthermore, almost all parts of the health system suffered from a heavy bias toward expensive curative services by specialists and overlooked more basic interventions and therapies. Basic health services, including prevention, education, and promotion, were mainly the responsibility of provincial health ministries and were frequently underdeveloped compared with other countries in the region. Access to basic services was near universal, but quality was often extremely poor due to fragmentation and inefficiency; many public health clinics in Buenos Aires lacked access to piped water.

Between 1993 and 2002, Argentina implemented a number of health reforms, mainly through decrees. The first step of the reform, in 1993, was to allow competition among OSNs. From 1993 on, the OSNs' monopolistic rights over the formal labor force of each sector were terminated, and workers could opt for an OSN of their choice. From then on, OSNs had to compete with one another for members, but membership continued to be compulsory for formal workers and their dependents. Public hospitals were given greater financial and managerial autonomy and were no longer financed through global budgets, but rather paid for services actually provided. They were allowed to recover costs from either health care insurers (that is, private and social health insurance) or those individuals who could afford to pay. These measures aimed to improve efficiency and resource allocation by subsidizing the demand for services rather than their supply.

The Solidarity Redistribution Fund (FSR) was modified in 1995. That fund has collected a percentage of all contributions and redistributed it to OSNs whose members have not reached a minimum level of contributions. In the past, those transfers were discretionary, so the FSR did not manage to shift funds from wealthier to poorer funds. Henceforth, the FSR was to function on the basis of preestablished criteria. The FSR has been an income adjuster rather than a risk adjuster, because it has equalized the different income levels of OSNs independently of the risks of their affiliates. In 1996, a standard benefits package, the Obligatory Medical Program (PMO), was introduced, to be provided by the OSNs and private health insurance. Following the economic crisis, a few modifications took place by 2002: The Emergency Obligatory Medical Program (PMOE) superseded the PMO. In addition, the National Policy on Medicines was implemented; the latter included the law entitled “campaign for the utilization of generic name medication” and public provision of basic medicines through a program called Remediar.

Outcomes fell short of expectations: self-managed public hospitals went without indemnities and reimbursements because OSNs and other third parties staved off payments. In 2002, between 20 percent and 30 percent of those who used public hospitals had some form of formal coverage; nevertheless, cost recovery accounted for just 3.5 percent of the budget of provincial hospitals. Due to the economic crisis, devaluation, and the skyrocketing costs of imported drugs, 160 OSNs covering 86 percent of the system's affiliates could not guarantee the PMO to their affiliates. The PMO was therefore replaced by the PMOE, which aimed to prioritize basic services in light of the economic situation. Some services were suspended, and copayments for pharmaceuticals grew from 40 percent to 60 percent. The Remediar program was successful in providing basic drugs to the more vulnerable while strengthening primary health care. The program has not only dispensed free basic drugs but also provided medical consultations in primary health care centers (CAPS), all of which have been, unlike before, strictly free of charge. From 1997 to 2002, 9 percent of the population lost their social health insurance, primarily because of unemployment or informal occupation.

During the crisis's apogee (2001–2002), indigent patients postponed or forwent care altogether whenever they felt they needed it. Hopes to consolidate and streamline social health insurance turned out to be forlorn. It was expected that after several years of competition, the number of funds would decline from over 300 to fewer than 50, since small, uneconomical, and poorly managed funds would vanish. The sector remained highly fragmented, and the number of funds was barely reduced (from 312 in 1993 to 268 in 2003). Despite its considerable growth during the 1990s, the private sector has remained badly regulated both as a provider and as an insurer; the bill that aimed for stricter regulation was never ratified by congress or enacted into law.

Argentina has not taken on advanced scientific tools and methods (such as economic evaluations) to ration health services. Rather than applying rigorous, evidence-based criteria, decision makers allocate resources quite poorly, on the basis of institutional factors, precedents from prior decisions, social pressures, and fears of litigation.

Australia: A Public-Private Seesaw

When the federal health department was established in 1921, it had a mandate to cooperate with the states in health matters (information in this section from Philippon and Braithwaite, 2008; Healy and others, 2006; Palmer and Short, 2000; Hall and others, 1999; Savage and Lu, 2007). In 1946, a constitutional amendment further enlarged the role of the commonwealth (the federal Australian government) to include health policy. The commonwealth has been vested with the right to make laws affecting health, including the provision of pharmaceutical, sickness, and hospital benefits, and medical and dental services. The commonwealth became the dominant player on matters pertaining to physicians and pharmaceuticals; hospital matters, including arrangements with medical staff and non-general-practice care in the community, remained in state hands.

In subsequent years, the postwar health care system consolidated. Under the Pharmaceutical Benefits Act of 1950, the commonwealth subsidized drugs, and medical services were added under the National Health Act 1953. Health insurance premiums have been community rated (that is, everyone pays the same premium for the same product, irrespective of their risk or previous claims) since 1953 to ensure that private insurance is within reach of all.

Medibank, a national health insurance plan, was introduced in 1975 after a political debate; the Health Insurance Commission was established to administer the plan. The Liberal-led coalition government (1975–1983) made a series of changes to Medibank: Individuals could opt out of Medibank and purchase private health insurance, or pay a levy of 2.5 percent of taxable income to remain in the plan. By 1981, a significant proportion of the population was not effectively insured for hospital treatment. Public funding for health care, principally for public hospitals, continued to be negotiated periodically between the commonwealth and the states.

Medicare superseded Medibank in 1984. This plan, unique among developed nations, combined universal health insurance financed through taxes with a component of private health insurance. In 1970, prior to the introduction of universal public insurance, 80 percent of the population held private health insurance. Coverage has since declined, dropping to 50 percent in 1984 when Medicare was first introduced and backsliding further to 30 percent toward the end of the 1990s.

Medicare, the tax-funded national health insurance plan, has offered patients subsidized access to their doctor of choice for out-of-hospital care and subsidized pharmaceuticals. All permanent residents of Australia have been entitled to free treatment in a public hospital under Medicare. Inpatient medical services have been provided for public patients by salaried (public) hospital doctors; patients have not been billed for such services. Patients with private insurance have had the right to be treated as public patients, thereby avoiding hospital charges without drawing on any private entitlements. The structure of the financing system has borne heavily on privately insured Australians. Not only have they been facing rising premium costs, but when treated as private patients they have found themselves facing large, often unpredicted out-of-pocket expenses, because insurance has not covered the difference between scheduled medical fees and the actual fees charged. Moreover, privately insured patients had often been treated in a public hospital, side by side with public patients—same accommodation, same nursing staff, and same doctors, but afterward received a pile of bills without being fully recompensed by their insurance, whereas the public patient never saw a bill.

The Australian government introduced financial incentives in July 1997 to encourage Australians to acquire private insurance and thereby curtail public usage of public hospitals. Accusing wealthy households without insurance of free riding on Medicare (30 percent of wealthy households had been uninsured when the measures were foreshadowed), the government juxtaposed positive incentives for low-income earners with financial penalties for high-income groups. For example, singles earning less than $35,000 per annum and couples earning less than $70,000 per annum were eligible for a rebate on private insurance premiums. The income threshold for families was raised by $3,000 for each dependent child in the family. High-income earners, such as singles earning above $50,000 or couples earning above $100,000, without private insurance were charged an income tax surcharge of 1 percent. These incentives, however, did not curb cost pressures in the system or address the large out-of-pocket costs that had made private insurance unpopular. From January 1999, a non-means-tested 30 percent tax rebate for those taking out private health insurance replaced the former subsidies. In 2005, the rebate was increased to 40 percent for people aged 70 and older, and to 35 percent for those aged 65 to 69.
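
The 1997 carrot-and-stick arrangement can be summarized in a few lines of code. The income thresholds, the $3,000 adjustment per dependent child, and the 1 percent surcharge are the figures quoted above; the size of the rebate itself is left out because it varied by family type, and treating couples and families under one threshold is a simplifying assumption of the sketch.

    def incentives_1997(income, couple_or_family=False, dependent_children=0,
                        insured=False):
        """Sketch of the 1997 Australian incentive structure described above.

        The thresholds and the 1% surcharge follow the figures in the text;
        everything else (e.g., the rebate amount, and treating couples and
        families alike) is a simplifying assumption for illustration.
        """
        # Rebate eligibility for low-income earners.
        if couple_or_family:
            rebate_threshold = 70000 + 3000 * dependent_children
        else:
            rebate_threshold = 35000
        eligible_for_rebate = income < rebate_threshold

        # Income tax surcharge for high earners without private cover.
        surcharge_threshold = 100000 if couple_or_family else 50000
        surcharge = 0.0
        if income > surcharge_threshold and not insured:
            surcharge = 0.01 * income  # 1 percent of taxable income

        return eligible_for_rebate, surcharge


    # A hypothetical uninsured single on $60,000: no rebate, $600 surcharge.
    print(incentives_1997(60000, insured=False))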

In 2000, “Lifetime Health Cover” impelled individuals to obtain health insurance while still young and to retain it, by allowing premium levels to rise with the age at which a person first joins a health fund. The base insurance premium applied to anyone taking out health insurance up to the age of 30. These people continued to pay base premiums as long as they remained insured, although this did not protect them against increases in the base premium over time. The premium increased by 2 percent for each year the individual was over 30 at the time of first joining an insurance fund, with a ceiling of 70 percent above the base premium.
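
The Lifetime Health Cover loading thus reduces to a simple formula: 2 percent of the base premium for every year of age over 30 at the time of joining, capped at 70 percent above the base. The sketch below applies that formula to an assumed base premium of $1,000.

    def lifetime_health_cover_premium(base_premium, entry_age):
        """Premium under Lifetime Health Cover as described above.

        The loading is 2% of the base premium for each year the person is
        over 30 when first joining a fund, capped at 70% above the base.
        """
        years_over_30 = max(0, entry_age - 30)
        loading = min(0.02 * years_over_30, 0.70)
        return base_premium * (1 + loading)


    # With an assumed base premium of $1,000: joining at 30 costs $1,000,
    # at 40 costs $1,200, and from about age 65 onward the loading is
    # capped, so joining at 70 costs $1,700.
    for age in (30, 40, 70):
        print(age, lifetime_health_cover_premium(1000, age))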

Savage and Lu (2007) found that privately insured people used private hospitals more, but not at the expense of public ones, so private insurance did not relieve the burden on public hospitals. The financial incentives nevertheless bore fruit: within half a decade, private coverage climbed from 30 percent to 43 percent of the population.

The Australian health system has been unique in terms of its extensive public-private and state-commonwealth interaction. The commonwealth has subsidized a substantial proportion of all expenditure on health care, such as on services rendered by doctors in private practice, by private nursing homes, or for pharmaceuticals provided by the private sector. Although the commonwealth has paid the lion's share of doctors' fees in private practice and for medical education, all aspects of legal control of the profession, including registration and setting fees, reside with the states. The registration of private nursing homes has been vested in the states, but the commonwealth has accounted for a vast part of their income through subsidies. With the commonwealth financing services and the states administering them, tensions have been inevitable.

States' funding for health care has come from their share of tax revenues on goods and services produced under their jurisdiction, block grants and specific purpose payments from the Australian government, funding out of their own fiscal resources, and funding provided by nongovernment sources (usually user fees). Within the state budgetary process, the health portfolio has accounted for up to 40 percent of recurrent funds. Commonwealth health grants to the states under the Australian Health Care Agreements have been based on a population formula plus components of performance measurement. First formalized in the 1984–1988 agreement, the Australian Health Care Agreements (funding mainly for public hospitals) have been negotiated every five years between the commonwealth and state governments. The commonwealth has accommodated the states with capped block grants that the states generally have regarded as insufficient to cover hospital costs. The agreements set out a number of conditions and performance indicators, including service targets, but allowed the states flexibility in allocating resources to hospitals. In turn, states have been compelled to provide free treatment in public hospitals to all eligible persons. The commonwealth could deduct health grants for states from other revenues it owed the states. Some grants have been subject to “fiscal equalization” to ensure that all states are able to provide an adequate level of services without levying higher taxes or surcharges upon their citizens, so in effect the poorer states are cross-subsidized by the richer states.

The Australian government has ventured perennially to improve health care quality, outcomes, and access (AIHW, 2008), inaugurating, for example, a program to monitor general practice in April 1998. The Bettering the Evaluation and Care of Health (BEACH) program undertook to collect new samples of data from about 1,000 GPs each year, monitoring eight health conditions declared National Health Priority Areas by both the federal government and the states. BEACH has undertaken to increase certain GP payments and reduce patient costs, create payment incentives through the Medicare plan to encourage GPs to better manage chronic diseases, and foster preventive health checks among at-risk groups. Australia has ranked high in international measurements comparing the health of populations and has constantly improved in many areas. Every year, Australia has spent more on health, even after allowing for inflation: Over the past decade, its spending grew from 7.9 percent to 9.4 percent of all spending on goods and services (AIHW, 2012).

Education

Education is the primary means through which people are socialized into their society and prepared for a productive work life. It can be predominantly provided by public actors (as, for instance, in France and the United States) or by a mix of public and private actors (as, for instance, in the Netherlands). It can be centrally controlled (as in France), or it can be more decentralized (as in Britain and the United States). In the past four to five decades, education has been subject to reforms from the primary level up to higher education.

Wisconsin: Vouchering Together—Unlikely Allies in Milwaukee

In 1962, Milton Friedman, a conservative University of Chicago economist, argued that the government should subsidize education but not operate schools, and advocated a relatively unregulated voucher plan (information in this section from Fowler, 2003; Hoxby, 2000, 2007; Belfield and Levin, 2002; Greene and others, 1998; Rouse, 1998; Farrell and Mathews, 2006; Waggoner, 1996; Witte and others, 2012). In the 1970s, most children in the United States attended the public school to which they had been zoned by their school board; parental choice and experiments with vouchers were sporadic and sparse. Almost two decades later, increased parental choice was the main driving force behind reforms in elementary and secondary schooling. Reforms had to do with intradistrict choice, interdistrict choice, and vouchers for private and/or charter schools. Charter schools are public schools that operate under a negotiated charter and are financially supported by a fair share of the state funding that their students would have received had they attended a school in their home district; admission cannot be selective. Many economists believe that market competition improves both technical and allocative efficiency in the use of resources, since suppliers must strive to be efficient and consumers have more choices. School choice proponents do not favor government-operated monopolies and blame them for the system's problems. The premise that private schools are more efficient than public schools underlies many reforms.

A late 1970s policy paper distributed by the Heritage Foundation during the Reagan presidential campaign suggested funding vouchers for private schools with public dollars. Against the backdrop of declining urban public education, vouchers would enable inner-city parents to provide their children with the high-quality private education long enjoyed by those who were predominantly middle class and white. Milwaukee Public Schools (MPS) served an impoverished population of minority students who consistently performed well below their counterparts in other areas of Wisconsin. In January 1976, federal Judge John Reynolds ruled that the Milwaukee Public Schools were unlawfully segregated. MPS responded with a program to integrate the public schools and improve educational achievement. From 1973 to 1993, real spending per pupil increased by 82 percent, but the graduation rate dropped from 79 percent of each freshman class to merely 44 percent. Rising costs and falling test scores impelled Wisconsin Governor Anthony Earl and Superintendent of Public Instruction Herbert Grover to commission an independent inquiry in 1984 to render a comprehensive review of public education in the region. The Study Commission found alarming disparities between low-income students and students from middle- to upper-income families. The report portrayed many MPS schools as ineffective, bankrupt institutions in terms of test scores and dropout rates. For years, MPS had informed the public that a majority of students were performing near the national average. That widely reported conclusion concealed a since-discarded definition of “average” that included students with scores as low as the twenty-third percentile nationwide. A 1990 research project revealed even more disturbing results regarding MPS in general and minority students' scores in particular.

In the late 1980s, Annette “Polly” Williams, a Democratic Wisconsin representative, a fiery African American, and a former welfare recipient, aligned with conservatives around the nation when she called for choice in Milwaukee's public schools. This unique alliance between middle-class white conservatives and more liberal, inner-city minorities was strengthened by African American parents' disenchantment with the educational promise of school desegregation. After more than a quarter century, school desegregation seemed unresponsive to the complex educational and social needs of the black students it was supposed to help. The time was ripe for “the Rosa Parks of School Choice” to convince Governor Thompson and Wisconsin to set a voucher system in motion.

The Milwaukee Parental Choice Program (MPCP) was enacted on April 27, 1990, by the Wisconsin Legislature as part of a larger budget bill. With the adoption of the MPCP, Wisconsin became the first state in the nation to implement a parental choice program involving the use of private schools as an alternative to public schools. From the 1990–1991 school year, qualifying Milwaukee residents could choose among three options for their children: neighborhood public schools, public magnet schools, and nonsectarian private schools. Only sectarian private schools were excluded, but not for long. In 1995, 100 low-income families stepped into the precarious terrain of state versus religion and filed a lawsuit in the federal courts. Demanding the inclusion of parochial schools in the choice program, the parents asserted that excluding such schools deprived them of their right to free exercise of religion in violation of the First Amendment and of equal protection under the law in violation of the Fourteenth Amendment of the U.S. Constitution. The courts finally yielded to the parents, and sectarian (religious) private schools were added to the MPCP.

The superintendent of public instruction administered the program. Upon receiving proof of a student's enrollment, he was required to pay the private school with funds that would otherwise go to the public school district. The statute also obligated the superintendent to ensure that Milwaukee citizens were informed annually of participating schools, so that students could meet application deadlines. The legislation limited eligibility to students whose family income did not exceed 1.75 times the federal poverty line. To qualify for and remain in the program, a participating school was prohibited from discriminating on the basis of race, color, or national origin and had to accept applicants on a random basis. Schools were not allowed to charge additional tuition beyond the voucher and were obligated to supply certain information to the superintendent of public instruction.
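
The income test itself is simple arithmetic: a family qualified if its income did not exceed 1.75 times the federal poverty line for a household of its size. The sketch below illustrates the test with hypothetical poverty-line amounts, since the federal guidelines change every year.

    def mpcp_income_eligible(family_income, household_size, poverty_guidelines):
        """Check the MPCP income test described above: family income must
        not exceed 1.75 times the federal poverty line for the household.

        `poverty_guidelines` maps household size to the poverty-line amount;
        the sample figures below are hypothetical, since the federal
        guidelines are updated annually.
        """
        poverty_line = poverty_guidelines[household_size]
        return family_income <= 1.75 * poverty_line


    # Hypothetical guideline amounts, for illustration only.
    sample_guidelines = {1: 15000, 2: 20000, 3: 25000, 4: 30000}

    # A family of four earning $48,000 would qualify (limit: $52,500).
    print(mpcp_income_eligible(48000, 4, sample_guidelines))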

Apart from expanding educational choices for low-income students, the program's proponents envisioned that it would engender educational success owing to competition between the public and private educational sectors while spurring thrift and innovation. Since its inception, the program has resonated throughout the nation; many research projects have broached parental choice. It has been found that most Milwaukee parents and students were pleased with the program. Findings concerning academic achievement, by contrast, have been inconclusive at best and altogether contradictory at worst. In carrying out his statutory responsibility for evaluating the program, Grover, the superintendent of public instruction, appointed John Witte (a professor of political science and public affairs at the Robert M. La Follette School of Public Affairs at the University of Wisconsin-Madison) to conduct research in strict compliance with the statute. He found that choice students remained approximately equal to low-income students in MPS; there were no significant gains in test scores among MPCP students (referenced in Waggoner, 1996). Witte's conclusions were castigated on the grounds that too few students participated in the MPCP and too little time had elapsed to draw any meaningful conclusions about academic performance, so the evaluation had been “biased against finding choice schools effective” (Waggoner, 1996, p. 178). Since students who took part in the program, as well as those who dropped out of it, possessed distinctive characteristics, it was impossible to randomize selection, not to mention that many children switched between public and private schools during the research. Factors affecting academic achievement (such as family background, residential area, and so forth) are intricate and impossible to disentangle from the school's influence alone. After controlling for variables that may confound results, Greene and others (1998) found positive and significant results in math and reading for students taking part in the program for three to four years and concluded that choosing a private school does enhance test scores and improve academic achievement, because choice allegedly enables a closer match between school qualities and students' needs. It may well have been that magnet schools in Milwaukee adapted and became more responsive in order to lure students back to the public schools and gain funds equivalent to the size of their vouchers.

Hoxby (2000, 2007) was criticized for her dataset but still held to her claim that increased market pressures on public schools had improved achievement and lowered costs because parents better matched their children with schools. Rouse (1998) compared students in the choice schools with students from the Milwaukee public schools and found that students in the MPCP had gained rapidly in math scores but not in reading scores. Belfield and Levin (2002) conducted metaresearch by surveying 41 empirical studies dealing with school choice and concluded that enhanced competition does bring about positive gains, though modest in scope. More recently, a team headed by John Witte, the first researcher appointed to evaluate the program in 1990, found that students who had participated in the MPCP in the 2006–2007 school year had higher reading achievement in the 2010–2011 school year (with an effect size of 0.15); there was, however, no difference in math achievement.

Farrell and Mathews (2006) portray a gloomy picture of “alarming deficiencies” demonstrated by choice schools, some of which lack the ability, knowledge, and skills (resorting, for example, to unqualified teachers and administrators), or even the will, to educate their low-income, principally black students. For many schools, state voucher payments are their only source of income. Sometimes, staff paychecks are not issued until the voucher checks arrive. Lenient legislation allowed anyone capable of meeting a few building code requirements and securing an occupancy license to open a school. Vouchers have therefore preserved existing private schools and triggered new, low-quality ones.

Despite inconclusive evidence, vouchers have become more and more popular; they are seen as a panacea for the maladies of public education. Once the choice show has gotten on the road, the voucher wagon can no longer be diverted from its route. Across the nation, champions of both the private sector and reform have used the MPCP as their operational model. Following stinging defeats of voucher referendums in Michigan and California for the second time in 2000, proponents of vouchers have focused on convincing individual legislators of the efficacy of publicly funded vouchers rather than relying on a majority of voters to move the school voucher initiative forward at the state level. At the federal level, vouchers were approved in September 2005 for students whose education had been disrupted by Hurricane Katrina. The MPCP has been cited in campaigns for voucher bills in Virginia, New Mexico, New Jersey, New York, Georgia, and a number of other states. The alliance of inner-city black liberals and white middle-class conservatives has conceived of vouchers as the educational salvation of low-income minority students.

France: Contractualization au Courant—Le Central Unleashes Higher Education

Since the early nineteenth century, the higher education system in France has been mostly in the public sector. From 1885 on, 15 universities were established as institutions for providing specialized teaching to students. These universities were collections of five faculties (the same in all universities: theology, law, medicine, arts, and science) headed by a university council vested with no actual power and a president appointed by the central government of the day. Since the grandes écoles had already been training elites efficiently and prestigiously, these universities struggled with their role in higher education. This struggle intensified in the 1960s with the beginning of mass higher education (information in this section from Chevaillier, 1998, 2001; Kaiser, 2007; Deer, 2002; De Meulemeester, 2003; Daun, 2004).

USSR-like central planning persisted throughout the 1950s and 1960s, as it was deemed efficient for the postwar reconstruction of economic infrastructure. The economy grew more complex alongside rapid change in the way goods and services were produced. Since the needs of the population could no longer be easily foreseen, central planning became a rigid and wasteful form of organization.

A vibrant student movement inspired Edgar Faure's framework law of 1968, which was the basis of university organization for many years. The act provided for funding of institutions through block grants. Structural changes started to counteract the inability of the central administration to run a system undergoing diversification and differentiation. The number of students grew when France, earlier than any other European nation, moved rapidly from elite to mass higher education. The changing terrain of higher education encompassed a large number of institutions distributed more evenly across the national territory, forging new links with local business and local authorities. With more young people in need of education and training for different types of jobs, students and other stakeholders put great pressure on universities to diversify and create new programs. The state's control over the curriculum through the accreditation of programs became more complex.

The most peculiar feature of French higher education has been a strong public research sector outside the universities alongside an elite sector of vocationally oriented, specialized institutions. Since the 1970s, the links between the universities and the research sector have grown, and the 1968 law gave the university professoriate the status of enseignant-chercheurs (“teacher-researchers”), stressing their research vocation.

In 1976, a new allocation method supplanted block grants. Under GARACES (the acronym for the Committee on Analysis and Research on Activities and Costs in Higher Education), funds were allocated to a university for general and teaching activities and were directly linked to the buildings used for teaching and the overall teaching load. GARACES bore heavily on academic programs: since new programs brought more funds once they were accredited by the ministry, hundreds of new programs, often well designed and popular with students, were developed. As for existing teaching programs, academic pressure groups (especially disciplinary groups) managed to mandate new subjects and lengthen the duration of courses.

In 1983, a major reform transferred a large share of responsibilities and fiscal resources from central government to local authorities. Decentralization laws gave rise to a tripartite contractual regime involving the respective central authority, the regional authorities, and the universities and/or central institutions, setting the stage for large-scale four-year contracts yet to come. Regional and local authorities have been officially goaded into a more active role in financing and steering higher education and research at their own territorial level. Regional councils have financed and developed universities and higher schools, funding investment in equipment, new buildings, and current expenditures related to selected programs designed to meet the training needs of the local industry.

The National Assessment Council (Conseil National d'Evaluation), created in 1985, failed to bring about truly revolutionary changes. In 1986, an attempt by right-wing politicians impressed by Reagan in the United States and Thatcher in Britain to impose free-market values on French higher education fell flat. The delegate minister of higher education, Alain Devaquet, ventured to infuse a solid dose of decentralization into universities and render them more autonomous in terms of their management—in French terms, régionalisation. Those attempts to enhance accountability through strict evaluation procedures jeopardized vested interests and culminated in political protest in which 600,000 fervent students and teachers swarmed the streets of Paris in early December 1986. The government yielded to the clamor for keeping universities a free good, openly accessible, and essentially public; Devaquet resigned and the bill was dropped.

Lionel Jospin, the new minister of education, and his special adviser Claude Allègre ushered in a novelty in 1988: a contractual mechanism. Central services for university administration were reorganized, and universities no longer received automatic funding corresponding to the number of students, having instead to justify their financial needs through a contract between the central state and the president of the university. Those four-year agreements were applied both to the universities and to the CNRS (the French Research Council). Inside universities, decisions were made regarding the objectives to be attained and how much financial support they necessitated. Once a plan had been agreed upon, the president of the university bargained directly with the Ministry of Education. For some, those contractual relationships mark a tipping point at which French universities emerged as independent and self-reflective bodies.

A new National Assessment Committee (Comité National d'Evaluation) initiated self-assessment exercises in order to lead institutions to evaluate themselves. New evaluation exercises were carried out at the level of a discipline or by region. The results of such evaluations had only institutional impacts with no personal ramifications, which somewhat weakened their incentive effect. The contractual policy spawned a new type of manager, as a new generation of university presidents, faculty deans, and heads of department developed plans and negotiated with the ministry. The ministry arranged for university administrators to be trained in managerial techniques. From 1983 onwards, decentralization, devolution (in its French version of regionalization), privatization, and greater financial autonomy coincided with similar developments in secondary schools.

In the late 1980s and early 1990s, enrollment in higher education spiked; in 1990 the minister of education heralded the plan U2000, inter alia to accommodate the wave of new students. This plan prompted central government and local authorities to coinvest in infrastructure for higher education for the sake of geographical coverage. The IUTs (instituts universitaires de technologie, the university institutes of technology) have been expanding the most. Although U2000 was deemed successful, the issue of research and the problematic situation in the Paris region were not properly addressed.

The LOLF (Loi organique relative aux lois de finances), the law on the new public budgeting and accounting system, was introduced in 2001 but came into force for the higher education sector on January 1, 2006. Intended to improve the transparency of the budgeting system and to make it more performance based, the LOLF has provided more information on the performance of the public sector, enabling parliament to better formulate strategic policies and set priorities. Under the LOLF, the agency responsible for a program has had to write an annual performance plan (PAP) setting forth objectives and performance indicators for the program. As an input for this PAP, higher education institutions have had to lay down indicators and deliver information regarding their attainment. A contract must include a precise, operationalized list of indicators in keeping with national objectives.

Since 2007, education and research have been split between two ministries. The Ministry of National Education covers only education up to the secondary level. Higher education and research have been placed in the Ministry of Higher Education and Research. The performance of the French higher education system has been reckoned quite mediocre according to international rankings and comparisons such as the Shanghai ranking and Organization for Economic Cooperation and Development (OECD) publications like Education at a Glance. Some blame the binary structure of universities and grandes écoles, but the low position in the rankings can be explained by the fact that universities and grandes écoles each cover only part of the indicators used (prestigious teachers, excellent research, and so forth). The lack of multidisciplinarity, the small size of the grandes écoles, and poor funding may account for the inferior ranking.

The People's Republic of China: Between a Soviet Rock and a Western Hard Place—Higher Education on the Horns of a Dilemma

China experienced dramatic changes to its social systems throughout the twentieth century as Western patterns interwove with Soviet models on China's political and educational stage (information in this section from Pepper, 1996; Xu, 2005; Law, 1995; Kang, 2004; World Bank, 1997; Ma, 2003; Zhong, 2007; Li and Zhang, 2010). Western influence started in 1905 when Chinese students, who had rushed to the United States, Japan, and France after the civil service examinations were abolished, came back. Those returning from overseas studies mechanically imitated foreign educational institutions, including curricula, textbooks, and teaching methods. The Soviet model of mass education forced its way in from 1921, when the first institutions were set up in Hunan by Mao Zedong and his friends. During the Yanan period (1934–1946), the ideology that embraced a modernized system of elite education vied for domination with the practical theory of mass education.

By 1949, the Chinese Communist Party (CCP) presided over not just an isolated rural hinterland but also the cities, coastal areas, and the South. In the early 1950s, the Soviet Union served as an ally, adviser, and inspiration for socioeconomic revolution. The pre-1949 republican system of higher education was aligned with the Soviet tripartite system to educate scientists and technologists for the sake of a new socialist China. The division of labor between higher education institutes in training different types of specialists was built on three basic types of institutes: comprehensive universities (responsible for education in the natural sciences, humanities, and social sciences), single-faculty institutes, and multifaculty institutes. The main purpose of this reform was to reduce the number of comprehensive universities and shrink the humanities and social sciences branch, and to increase the number of colleges related to the planned economy and applied subjects such as polytechnics, moral teaching, medicine, agriculture, politics, finance, economics, and so on. All rival forces such as market and religion that the CCP perceived to be associated with capitalism were eliminated or nationalized, and their ownership and administrative powers were transferred to the state.

To balance quality and quantity in districts across the country, many universities moved from the eastern regions near coastal areas to the western regions and the internal continental part of the country. After departing Americans and Europeans were superseded by Russians sent from the Soviet Union, 10,000 or more Soviet "experts" served in China during the 1950s, 700 of whom worked in higher education. The seductive Chinese bureaucracy has always been able to co-opt outsiders and alien dynasties. Enrollment criteria, based on key schools and family background, were designed to assimilate revolutionary sons and daughters into the rarefied world of China's intellectual elite. Rigid centralization served as a means to limit tertiary institutes', teachers', and students' deviation from plans and procedures outlined by the state. The Ministry of Education established departments and institutes, appointed the chief university executives and teaching staff, administered access to higher education, curricula, textbooks, and teaching references, and allocated material and human resources.

In 1966, Mao instigated a revolution, endeavoring to consolidate what had already been achieved in the economic base. Because it was regarded as the main area controlled by revisionism and the bourgeoisie, the education system came under attack. During the Great Cultural Revolution (1966–1976), all formal education in China was stopped. Higher education existed in name only, since all students in the middle schools and universities became "red guards" and joined the revolution while being denied any opportunity to study in formal education. Leaders of universities were removed from their positions. Almost everything traditional (Chinese) or Westernized was overthrown or replaced. Although the Chinese pendulum had been swinging between the West and the Soviet Union and between centralization and decentralization, all reforms were top down, initiated and ended by the central government. Embedded in a socialist society, education was owned by the government, funded by the government, and reformed by the government.

In the late 1970s, modernization through economic revitalization became paramount in tandem with Deng Xiaoping's leadership, which opened up a new era of reform. By 1978, the Soviet hold had dwindled. An important national conference for science and education brought economic modernization to the fore, laying emphasis on agriculture, industry, national defense, and science and technology. Deng Xiaoping believed that talented, highly trained personnel would spearhead modernization. Education reform gathered pace over the 1980s, attempting to advance toward international standards, modernize the socialist society, and form a virtuous cycle. The State Ministry of Education was changed to the State Education Commission in order to strengthen the ties between the central government and the education sector.

The CCP's endorsement of economic reform paved the way for a socialist market economy. In 1985, the Chinese Communist Party Central Committee's Decision on the Reform of the Educational System advocated decentralization. Responsibility for public services was devolved to lower levels of government. Aside from a three-level management system for schools at the central, provincial, and major municipal levels, universities were given new powers, particularly with regard to the content and methods of teaching, and could set up new programs and even new local institutions at the short-cycle level and in adult education. As higher education became subject to the socialist market economy, legislative status was granted to universities and colleges, and they resorted to various ways to complement their income, since financial allocations from central, provincial, and local governments were insufficient. The reform included a plan to separate university administration from local party power, but it was suspended following the Tiananmen Square incident in 1989. In 1985, only about 10 percent of higher education institutions were chosen to test this pilot plan, which by 1993 was formally abrogated by the Central Committee. University presidents were appointed by the state even though many department heads were elected.

The reform encouraged all sectors of society, including enterprises, institutions, public organizations or groups, as well as individual citizens, to run higher education institutions lawfully and to participate in and support the reform and development of higher education. The gist of the reform was twofold: transition from control to guidance in governance and shifting from full governmental funding to plural sources of funding. In the wake of progress toward mass education, Chinese higher education was no longer regarded as a political weapon but rather one of educating and cultivating. Those profound undertakings set the stage for future developments, which soon followed.

Throughout the 1990s, the Chinese government came to grips with issues of expansion versus quality. Since then, global attention to quality issues in higher education has reached an unprecedented level, as countries recognize the correlation between educational quality and economic growth, the need for greater accountability in times of declining resources, and the demands for systems to deliver value for money. After the central government had been reorganized, the State Education Commission was changed back to the State Ministry of Education. China has been extending enrollment in higher education since the beginning of the 1990s to match the student–teacher ratio of advanced countries and to construct a higher level of intellectual structure. In 1998, the Ministry of Education came up with “The Plan for Revitalizing Education in the Twenty-First Century” to accelerate the development of education, planning to reach a gross enrollment rate of 15 percent by the year 2010. The goal was reached in 2002, eight years ahead of schedule, so within five years China reached the beginning stage of mass higher education.

On December 11, 2001, when China was finally admitted as a full member to the World Trade Organization (WTO), it also entered the international education market. The entry of foreign institutions, especially those from developed countries, posed new challenges, since they might draw students away. China's universities and colleges were compelled to find ways to measure up to international standards and survive competition. Communicating in languages other than Chinese, particularly in English, became cardinal. English had been tested in the national entrance examination ever since the open-door policy of the late 1970s, but only a certain percentage of the points had been added to the total mark; by the mid-1990s, English became one of three key subjects (in addition to math and Chinese) in entrance examinations. Those lucky students who managed to enter university pursued intensified English studies and had to pass an English test before graduation.

In the mid-1990s, the government launched the "211" project to build 100 first-rate universities for the twenty-first century. At the end of the 1990s, the "985" project endeavored to build several world-class research universities. Two major measures were taken to guarantee the quality of higher education in the process of decentralization. All universities under the direct administration of the ministry must apply yearly for approval of the number of students to be admitted. Other institutions apply to the provincial government for the quota, and the total of the province must be approved by the state government, so gross enrollment rates are commensurate with the increase in real GDP. Graduates no longer receive government-assigned jobs through a central placement system, as they did until the late 1990s.

In 2000, the ministry appointed a specialist group to work out a program to evaluate undergraduate education. In 2002, the newly forged evaluation system was put in motion; it has encompassed 44 points of observation covering all aspects of undergraduate education such as infrastructure, facilities, teaching staff, administration, teacher performance, student discipline, student abilities, and so forth. Following a preliminary assessment of all existing universities and colleges, all institutions of higher learning are supposed to be appraised every five years. Successive governments have tried various means to develop a Chinese way of reform, combining socialist ideology and capitalist practice. The main concern remains the vast regional disparities; as China seizes opportunities offered by globalization, it struggles with daunting gaps between regions and between institutions that seem to grow larger and larger.

Britain: New Right, New ERA, Old Cleavages

Universal schooling began in England and Wales in 1870 when the Education Act established the basis for a state system of elementary education (a parallel law was enacted in Scotland in 1872) (unless stated otherwise, information in this section from Baldi, 2010; Sanderson, 1999; Timmins, 1996; Glennerster, 1998; Wolf, 2010; Blanden and others, 2005; Hlavac, 2007). The local government school boards (1870–1902) had raised rates to build schools and enforced attendance, which became nationally compulsory for the ages 5–10 and free-of-charge from 1891. Britain could no longer draw solely from the same narrow, traditional class to provide leadership in government and empire, so the state assisted and coerced poor parents to formally educate their children. At the turn of the century, concerns about competitiveness mounted as Britain declined from former economic world primacy while German and American goods swept the country. Education was also said to reduce juvenile delinquency.

The 1902 Education Act enabled county councils and county borough councils to set up Local Education Authorities (LEAs). Such committees administered schools within their jurisdiction and designed academic curricula. The act consolidated the system into a three-tiered structure. The first tier consisted of universal, compulsory, and free elementary schools for young people between the ages of 5 and 13. The second tier included voluntary, fee-charging secondary schools known as grammar schools, into which students were transferred around age 11. Selection to these schools was primarily based on competitive examinations at age 11 (the 11-plus). Grammar schools provided advanced instruction in the liberal arts until age 16, preparing students for university and/or professional careers. The third tier of the tripartite system, the junior technical schools, never educated more than 4 percent of schoolchildren; their entry age of 13 was at odds with other schools that admitted at 11, and psychologists cast doubt on whether the skill aptitudes for which the schools catered were detectable. With students separated into different classes of schools around age 11, this system was clearly organized on the basis of early selection, though it was students' family wealth or social standing (not past academic performance or perceived intellectual ability) that determined whether they would ascend to the academic track. Secondary schools were dominated by the middle classes, while the overwhelming majority of working-class students completed their entire education in a single elementary school before leaving at the minimum leaving age to enter the labor market. Many grammar schools, financed directly by the central government and hence independent of the LEA, took a proportion of local, very bright LEA scholarship children. Academically weak offspring of well-to-do parents who failed the 11-plus could pay a fee and buy their way out of secondary modern into grammar schools inappropriate for them. The Fisher Education Act of 1918 raised the school-leaving age to 14.

The Second World War spurred the 1944 Education Act, which garnered wide public support. It raised the school-leaving age to 15 while abolishing "all-age" schools, although 20 more years were needed to carry this through. The Board of Education was upgraded to a ministry, and religious schools (Anglican and Catholic) could secede from LEAs and opt for a "maintained" status, receiving direct grants from central government. Education remained a local, proudly defended terrain. LEAs needed to draw up plans, so the ministry could no longer dictate what should be taught. The ministry could affect the curriculum by means of advice and guidance via circulars and could mandate nationwide examinations, but the minister could not lay down the curriculum itself.

From the early 1950s onwards, the tripartite system was under attack as more and more educational sociologists and psychologists, think tanks, and research institutions proclaimed the 11-plus had misallocated children. Such allegations struck a chord with Anthony Crosland, a Labor MP and Oxford professor who would later become secretary of state for education and science under Harold Wilson's governments in the mid-1960s. Crosland envisaged a truly egalitarian system to level British society by providing all young people equal access to life chances.

LEAs had leeway to espouse any form of schooling, so comprehensives had already been established by the mid-1950s, but Crosland's circular 10/65 entreated LEAs to present plans to abolish the 11-plus and move to some form of comprehensive organization. Within a few years, the number of children in comprehensives soared, overtaking those in grammar schools in 1969 and those in secondary modern schools in 1972. Five years after Crosland, Thatcher's circular 10/70 withdrew Crosland's, telling LEAs they could keep their grammar schools and even open new ones should they wish. Since the tide was already flowing, Margaret Thatcher was to go down as the secretary of state for education and science (1970–1974) who closed more grammar schools than any other, while the number of comprehensives rose more rapidly under her than at any time before or since (Timmins, 1996, p. 298).

In 1974, the incoming Labor government set up the Assessment of Performance Unit (APU) with a remit to produce up-to-date national measures of school performance, and a unit to study ways to improve the attainments of children from deprived backgrounds. In 1976, the same government withdrew financial support from 151 grammar schools, ending their direct-grant status; middle-class fee payers were forced to bear the full costs of their children's schooling, unsupported by the taxpayer. Local parents with modest pecuniary means no longer had an alternative to the comprehensive.

In 1980, the new Conservative government reversed Labor's policy, and the "assisted places scheme" was enacted through the 1980 Education Act. Selected independent schools could offer places to children and charge reduced fees depending on their parents' income; the government reimbursed the schools for the difference. The scheme was launched in September 1981 with 5,300 places, later expanded to 35,000, where it remained until it was scrapped by New Labor in 1998. Nevertheless, the number of "assisted places" was barely a drop in the ocean compared to nearly 4 million children in secondary schools.

The most notable development since Butler's 1944 Education Act was by far Baker's Education Reform Act of 1988 (ERA), which incorporated 238 clauses, taking 415 new powers to the center (Timmins, 1996, p. 442). For more than a decade, scholars and politicians had advocated a national curriculum while pitted against teachers and LEAs. The ERA tipped the scale in favor of a national curriculum and turned Britain from one of the few industrialized countries without some form of national curriculum into one with the most detailed and prescriptive curriculum of them all (Timmins, 1996, p. 443). The ERA had many provisions pertaining to schools as well as to higher education institutions; its main elements were (Glennerster, 1998, p. 34):

  • A national curriculum was to be inaugurated, with a series of levels or attainment targets to be achieved by given ages, and common assessments set for children nationwide at ages 7, 11, 14, and 16, the last being the statutory school-leaving age.
  • Provision was made for schools to opt out of local authority control and to be funded directly from central government.
  • Local authority secondary schools, larger primary schools, and later virtually all schools had to have devolved budgets. The local authority was not able to retain more than 15 percent of the money set aside for schools in its own hands for centrally provided services. The rest had to be devolved to schools and managed by the governors who would include parents, local community representatives, and some teachers.
  • Universities were to be funded by a funding council that would take control over their affairs. Tenure for newly appointed academic staff was abolished.
  • Polytechnics moved from local authority control to become independent institutions under a separate polytechnic funding council.
  • The Inner London Education Authority was abolished, shifting powers to the local boroughs (Glennerster, 1998).

Despite measures to render the opting-out option more favorable, by August 1994, five years into the vociferous launch, the number of grant-maintained schools amounted to barely 1,000, equaling less than 5 percent of all schools. Among secondary schools, 16 percent, or one in six, of both schools and pupils had been grant maintained (Timmins, 1996, p. 446); halfway through 1996, 100 additional schools opted out. Open enrollment obliged schools to accept students up to a calculated number and allow parental choice. Although most schools did not opt out of their LEA, Local Management of Schools (LMS), whereby 85 percent of funds were delegated directly to schools, diminished the LEAs' role and decentralized management. Power shifted to central government, schools, and parents, rendering local government the major loser (Sanderson, 1999, p. 139).

Old controversies such as religion and selection reentered the fray when more and more Muslim groups wished to erect their own grant-maintained schools. In 1993, a grant-maintained school in Cumbria became the first grant-maintained comprehensive to go fully selective, joining 150 surviving grammars. Piecemeal, a string of schools around the country moved to select between 30 and 100 percent of their pupils, fearful that if they did not they would lose brighter pupils to rivals (Timmins, 1996, p. 512).

The Education Act of 1992 instituted regular inspections of schools and put pressure on local authorities to maintain standards by threatening to take over failing schools through special government Education Associations. Her Majesty's Inspectorate (HMI) for Education was renamed the Office for Standards in Education (OFSTED) and given extra prerogatives to become the locus of a rigorous testing regime with results published in the form of league tables (Jones, 2000, p. 179).

Teachers complained that league tables were statistically crude, and the criteria used even cruder; in 1997, the New Labor government withdrew its past opposition and endorsed league tables regardless. In what The Times (1998) dubbed “the annual festival of middle-class prejudices,” some schools were extolled while others were “named and shamed” for low rates of literacy and numeracy coupled with high rates of truancy. League tables have been accused of ignoring the very different life chances of children in different areas of Britain, since it has been unlikely that “children in areas with Britain's worst social problems would score as well as children from ‘glorious places where people work hard, live clean and pass their exams.’” (Jones, 2000, pp. 203–204)

In 1993, teachers, cowed by the government's frantic reforms, lashed back, spectacularly boycotting what was meant to be the first round of testing for 14-year-olds. A new School Curriculum and Assessment Authority overhauled the whole curriculum and testing regime. The curriculum was simplified and the testing regime was to follow suit. The new, slimmer curriculum left room for extras, allowing vocational courses for the less academic back into schools. In its 18-year reign (1979–1997), the Conservative government undertook to create a quasi-market in education (akin to similar measures it took in reorganizing health, community, and other local authorities' services), establishing a voucher system for all intents and purposes without the actual paper. Public funding was supposed to follow the "client" (be it a patient or a pupil) and target chiefly those from modest backgrounds. For the first time in its history, the British education system issued an elaborate and prescriptive mandatory curriculum, not unlike its continental competitors, the most prominent of which are France and Germany. Prima facie, it seems the Conservatives failed to foster social mobility; a recent study reported that intergenerational social mobility had fallen markedly in the UK over time. There was less mobility for the cohort of people born in 1970 than for the cohort born in 1958. Family income has weighed heavily on educational attainment: Students from more affluent families were more likely to stay in education, and hence benefited disproportionately from the expansion of higher education since the late 1980s.

With respect to international standards, the OECD's Program for International Student Assessment (PISA) study of 2000 concluded that English students performed poorly in math, ranking in the middle of an ordering of advanced countries. Yet English schools did well in science results, as English children outperformed those in most European and other English-speaking countries (Glennerster, 1998, p. 68). The central principle of tracking attainments over time vis-à-vis an external standard came to be widely accepted (Glennerster, 1998, p. 31).

Although Britain has a functioning system of school choice in place, choice has been optional only for those who can afford it; even within the state sector some parents can pay for choice with their mortgages by moving to catchment areas with good schools in the suburbs. Children from affluent households can also attend fee-paying schools in the private sector, so the poorest in society get the worst deal. In local authorities, the prevailing surplus places policy, whereby state schools cannot expand and no new schools may be established so long as there are unfilled places at state schools in the area, reduces pressures to compete, so children from disadvantaged families must attend one of the local state schools even if none of them are good. From the 1970s on, recurring projects were geared to those populations where poverty, unemployment, poor skills and education, and decay abound, but problems of deprivation and lack of social mobility have been highly resistant to solution (Jones, 2000, p. 204).

Social Security

One can argue, perhaps, that social security (which really is only economic security) is the invention of twentieth-century governments. Piecemeal social security legislation emerged in the second half of the nineteenth century, and from the 1920s on this was incorporated into encompassing social security systems. These paid benefits to retirees, the physically and/or mentally disabled, and the unemployed, to name three major categories of people. In some countries the social security system was extensive and expansive (for instance, in Sweden); in others less so (for instance, the United States). However much benefit levels may differ between countries, since the 1970s social security systems have been under reform in response to increased life expectancy, the retirement of baby boomers, and a sharp increase in the number of senior citizens. Also, where social security was state based and owned (as, for instance, with pension systems), governments attempted to develop privately managed alternatives.

Chile: Social Security Gone Outright Private

Launching its first national insurance fund in 1924, Chile spearheaded social security in Latin America (information in this section from Arenas de Mesa and Bertranou, 1997; Callund, 1999; Williamson, 2001; Godoy-Arcaya and Valdés-Prieto, 1997; Barr, 2006; Arenas de Mesa and Mesa-Lago, 2006; Mesa-Lago, 2009; Fajnzylber and Robalino, 2012; Barrientos, 1998; Rofman and others, 2008). The plan had initially covered only a few occupational categories and a fraction of the population, but coverage increased piecemeal over the years, encompassing three-quarters of the population by the early 1970s. Pensions were only moderately progressive, and excessive costs led to fiscal deficits.

The 1942 Beveridge Report embodied the zeitgeist of an epoch that witnessed ubiquitous welfare legislation. Following the post-World War II financial disarray, Chile restructured its pension system in 1952, introducing a pay-as-you-go (PAYG) plan under state management. PAYG schemes pay pensions out of current income, whereas fully funded plans pay pensions out of funds built over the years from members' contributions. Most state pension plans are PAYG. Conversely, private plans are funded (though not always adequately).
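
A minimal numeric sketch may help fix the distinction between the two financing logics just described. The figures below (wages, contribution rate, rate of return) are purely illustrative assumptions, not Chilean or any other country's data.

```python
# Illustrative contrast between pay-as-you-go (PAYG) and fully funded pensions.
# All figures are hypothetical; the point is the financing logic, not the magnitudes.

def payg_benefit(total_payroll, contribution_rate, n_pensioners):
    """PAYG: current contributions are immediately paid out to current pensioners."""
    return total_payroll * contribution_rate / n_pensioners

def funded_balance(annual_contribution, annual_return, years):
    """Fully funded: a member's own contributions accumulate with investment returns."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1 + annual_return)
    return balance

# 100 workers earning 30,000 each, a 10% contribution rate, and 25 pensioners
# give each pensioner 12,000 per year, paid directly out of current income.
print(payg_benefit(total_payroll=100 * 30_000, contribution_rate=0.10, n_pensioners=25))

# A funded account receiving 3,000 per year at a 3% return for 40 years
# accumulates roughly 233,000, out of which the member's own pension is paid.
print(round(funded_balance(annual_contribution=3_000, annual_return=0.03, years=40)))
```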

As the 1970s ebbed away, fertility rates dropped. In the wake of Chile's economic downturn, decisive reform was much needed to develop a productive economy at a time when everybody had a job but few were actively employed to do anything. By the mid-1970s, the Chilean plan could no longer function without huge subsidies out of government revenues, because the system was not generating sufficient revenues to cover pension obligations, even with payroll tax rates that could reach 25 percent. The pension plan was supposed to replace 70 percent of a manual worker's final wage, but by the late 1970s, the replacement rate was closer to 20 percent despite massive government subsidies. There were also serious problems of noncompliance, due in part to very high payroll tax rates. Moreover, the public pension system was fragmented into 35 funds or plans, most of which were financially imbalanced and with significant differences in coverage, entitlement conditions, contributions, and financial status.

Pensions were privatized under a more general effort by General Augusto Pinochet to marketize the Chilean economy. The authoritarian Pinochet regime was able to impose this policy shift despite opposition from a number of groups, including public sector workers, teachers, health workers, academic experts, and union members. In 1979, after the military government had unified the public pension funds, it also raised and equalized the retirement age and level of contributions. In 1980, the public system was closed aside from plans pertaining to the armed forces; in May 1981, a new private system took effect. Chile became the first nation in the world to shift from a public PAYG defined-benefit (DB) pension system to a privatized, defined-contribution (DC) alternative based on individual accounts, market capitalization, and private management.

A short period was given for those insured to choose whether to stay under the regime of the public system or move to the private one. Individuals who switched to the new system were not allowed to reverse their decision and return to their previous plan. Since December 1982, the new system has been the only government-sponsored pension plan available to new entrants into the labor market, so they were enrolled in it automatically. The Chilean Personal Pension System has been compulsory for wage earners and salaried employees and optional for the self-employed. Employers' contributions were eliminated, and workers deposited 10 percent of their income (the defined contribution) in individual accounts managed by private for-profit corporations set up for this sole purpose.

Workers paid commissions to the AFPs (Administradoras de Fondo de Pensiones) for the administration of the old-age program, part of which was transferred to private insurance companies to cover risks associated with disability and survivors (the insured person's dependents). Contributions bought units in the AFP's single-pension fund that invested in a range of permitted assets. The pension fund could only be used to arrange pension benefits at retirement; workers could use their accumulated fund to either purchase a life annuity from an insurance company or agree on a scheduled withdrawal program with their AFP (or a combination thereof).
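
The account mechanics described in the preceding paragraphs can be sketched roughly as follows. The wage path, investment return, discount rate, and life expectancy are illustrative assumptions, and the payout formulas are simplified stand-ins for the actuarial rules an AFP or insurer would actually apply.

```python
# Rough sketch of the individual-account mechanics described above: 10% of the
# wage is deposited each year, the AFP invests the balance, and at retirement
# the fund buys a life annuity or finances a scheduled withdrawal.
# Wage path, return, discount rate, and life expectancy are illustrative assumptions.

CONTRIBUTION_RATE = 0.10

def accumulate(wages, annual_return):
    """Accumulate 10% of each year's wage with compound investment returns."""
    balance = 0.0
    for wage in wages:
        balance = balance * (1 + annual_return) + CONTRIBUTION_RATE * wage
    return balance

def annuity_payment(balance, life_expectancy_years, discount_rate):
    """Approximate level annual payment of a life annuity bought with the balance."""
    factor = (1 - (1 + discount_rate) ** -life_expectancy_years) / discount_rate
    return balance / factor

def scheduled_withdrawal(balance, life_expectancy_years):
    """First-year payment under a simple scheduled-withdrawal rule."""
    return balance / life_expectancy_years

wages = [12_000] * 40  # 40 years at a flat, hypothetical annual wage
fund = accumulate(wages, annual_return=0.04)
print(round(fund))                                                   # balance at retirement
print(round(annuity_payment(fund, life_expectancy_years=20, discount_rate=0.03)))
print(round(scheduled_withdrawal(fund, life_expectancy_years=20)))
```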

Chile's move resonated in many countries. The “multi-pillar model” with a mandatory funded component was the flagship of economic measures advocated by the World Bank (WB). Many countries have followed Chile's example and added mandatory contributions to private pension funds alongside contributions to the state's PAYG system, although the Chilean system was hardly flawless.

Many, for instance, pointed to built-in bias against women due to their low-paid occupations and intermittent job record. Under the Chilean personal pension system, workers assumed the investment risk in full, but the government in turn provided a minimum pension guarantee. The level of the minimum pension was set by the government biannually. In the event a worker's accumulated fund had been insufficient to generate a pension benefit at least equal to the minimum pension level, the government would have supplemented the fund to secure it. Affiliates would have qualified for a minimum pension after 20 years of contributions, which could include up to three years of inactivity due to unemployment. In addition to retirement pensions, workers who had paid additional contributions were entitled to disability and survivor pension benefits. Survivor pension benefits were available to female spouses of participants, but not to male spouses. Workers with significant inactivity spells and female employees with participating spouses had very little incentive to join a pension fund. Rigid legislation prevented pension fund administrators from making changes in terms of administration and marketing expenses.

Commissions charged by the AFPs have been deducted from wages, paid by the insured, and set freely by the AFPs, and have been of two types: a fixed sum and a variable percentage. Fixed commissions have had regressive effects because they have been proportionally higher for low-income employees (reducing their net deposits in individual accounts, capital returns, and pension levels) than for those with high income. Half the membership in 1997 switched between AFPs after being persuaded by agents hungry for commissions who went so far as to offer "cash back" to people who agreed to switch funds. Administrative costs engendered huge public deficits (4.7 percent of GDP in the 1990s) that exceeded those of the old public system, and when it came to choosing the most convenient AFP, affiliates received sophisticated and misleading information.
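
A small worked example may show why the fixed component of the commission is regressive. The fee levels here are hypothetical; only the 10 percent contribution rate comes from the text.

```python
# Why a fixed-sum commission is regressive: it eats a larger share of a low wage
# than of a high wage, so less reaches the low earner's individual account.
# The commission levels are hypothetical, chosen only to show the asymmetry.

FIXED_FEE = 5.0          # flat monthly fee, the same for every member (hypothetical)
VARIABLE_FEE = 0.023     # percentage fee on the wage (hypothetical)
CONTRIBUTION_RATE = 0.10

def net_deposit(monthly_wage):
    contribution = CONTRIBUTION_RATE * monthly_wage
    commission = FIXED_FEE + VARIABLE_FEE * monthly_wage
    return contribution, commission

for wage in (200.0, 2_000.0):
    contribution, commission = net_deposit(wage)
    burden = commission / wage
    print(f"wage {wage:>7.0f}: contribution {contribution:6.1f}, "
          f"commission {commission:6.2f} ({burden:.1%} of wage)")
# The flat fee alone is 2.5% of a 200 wage but only 0.25% of a 2,000 wage,
# so the low earner's effective charge is far higher relative to income.
```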

In 1999, new regulations thwarted unnecessary movements between AFPs by scaling down the fees pocketed by salespersons. Overconcentration began to plague the system. The number of AFPs, which had been 12 prior to privatization, peaked at 21 in 1994 after trade unions were allowed to organize their own AFPs. Since 1995, AFPs have been closing down and merging, so by 2004 their number had fallen to only six—half the initial number. Further concentration has been likely because the biggest AFPs have been controlled by foreign corporations that tend to take over or annex competitors. Extraordinary profits over assets, alongside a lack of new entry over a long period, attested to insufficient price competition.

Pensions have been financed by accumulated funds, which meant that benefits have been nondefined (ND) and therefore uncertain. Benefits were contingent upon five factors: (a) the amount of contributions deposited in the individual account during the working life of the insured; (b) the capital returns on the investment of the fund in such an account; (c) the life expectancy of the old-age pensioner; (d) his or her gender; and (e) the number, age, and life expectancy of the insured's dependents.

Until 2008, the government offered two publicly funded programs for individuals with low pensions or no pensions at all: the minimum pension guarantee (MPG) and assistance pensions (pensiones asistenciales, PASIS). Those who had contributed for at least 20 years (including contributions to the old system) were entitled to an MPG—a fixed pension paid by the state when the individual balance was exhausted or when the annuity was below the minimum pension level. PASIS benefits were paid to those over age 65 with no pensions or other forms of income. Unlike regular pensions, minimum pensions were not indexed against inflation, but rather directly adjusted by government.
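
The pre-2008 safety net just described can be reduced to a simple decision rule, sketched below. The monetary amounts are hypothetical placeholders, and the conditions are simplified; the eligibility logic follows the text (20 contribution years for the MPG, age 65 and no other pension or income for PASIS).

```python
# Sketch of the pre-2008 safety net described above: the state tops up to the
# minimum pension (MPG) only for those with 20+ contribution years, while PASIS
# was a separate assistance benefit for the elderly with no pension or income.
# Monetary values are hypothetical placeholders.

MINIMUM_PENSION = 100.0   # hypothetical minimum pension level set by government
PASIS_BENEFIT = 50.0      # hypothetical assistance pension

def monthly_benefit(own_pension, contribution_years, age, other_income=0.0):
    if own_pension >= MINIMUM_PENSION:
        return own_pension                      # self-financed pension suffices
    if contribution_years >= 20:
        return MINIMUM_PENSION                  # state tops up to the MPG floor
    if age >= 65 and own_pension == 0 and other_income == 0:
        return PASIS_BENEFIT                    # means-tested assistance pension
    return own_pension                          # otherwise, no supplement

print(monthly_benefit(own_pension=70, contribution_years=25, age=66))   # -> 100.0
print(monthly_benefit(own_pension=0, contribution_years=5, age=70))     # -> 50.0
print(monthly_benefit(own_pension=40, contribution_years=10, age=70))   # -> 40.0
```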

The Chilean system was amended further as it ripened toward its third decade. In March 2006, Michelle Bachelet, the newly elected president, commissioned an experts committee to draw up a report. Two years later, the congress approved a comprehensive bill representing the most significant reorganization since the original 1980 reform. The 2008 reform replaced the MPG and the PASIS programs with a single plan securing a basic pension to all individuals in the least affluent 60 percent of the population regardless of their contribution history. This new program has provided old-age and disability subsidies drawing on the state's general revenues. This New Solidarity Pillar (NSP), supplanting the old means-tested programs, has bifurcated into an old-age Basic Solidarity Pension (PBS) paid to individuals with no contributions once they reach 65, and a Pension Solidarity Complement (APS) paid to individuals who contributed but would have received a pension below a certain threshold.

To resolve the problem of women's lower contribution density because of child rearing, the Chilean re-reform has granted a universal maternity voucher to all mothers (independent of their socioeconomic position) for each live-born child, equivalent to 18 months of contributions based on minimum salaries. The voucher is to be deposited on the date of the child's birth from which point it receives an annual rate of return, cashable when the woman turns 65, increasing the level of her pension. A further attempt to alleviate gender inequalities in the pension system levied a single charge on both men and women for disability and survival insurance. Owing to higher average salaries, men have paid higher premiums. The premiums paid by women have been deposited in individual accounts and invested, and that surplus gave invalid spouses of insured women rights to a pension that they had not possessed before. In case of conjugal separation, the funds accumulated during the marriage might be divided between the two spouses to a maximum of 50 percent each. The new Chilean law also withdrew minimum pensions from AFPs, eliminated the superintendence overseeing them, and erected instead a single unified pension superintendence to supervise both the public and the private systems.
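
The arithmetic of the maternity voucher, as described above, can be illustrated with a back-of-envelope sketch. The minimum wage and the rate of return are assumptions; only the 18 months of contributions and the payout at age 65 come from the text.

```python
# Back-of-envelope sketch of the maternity voucher described above: 18 months of
# contributions on the minimum salary, credited at the child's birth and earning
# an annual return until the mother turns 65. Wage and return are assumptions.

CONTRIBUTION_RATE = 0.10      # individual-account contribution rate
MINIMUM_MONTHLY_WAGE = 300.0  # hypothetical minimum salary
ANNUAL_RETURN = 0.04          # hypothetical rate of return

def maternity_voucher_at_65(mother_age_at_birth):
    voucher = CONTRIBUTION_RATE * MINIMUM_MONTHLY_WAGE * 18
    years_compounding = 65 - mother_age_at_birth
    return voucher * (1 + ANNUAL_RETURN) ** years_compounding

# A child born when the mother is 30 leaves 35 years of compounding:
print(round(maternity_voucher_at_65(mother_age_at_birth=30)))  # roughly 2,130
```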

To stimulate competition and reduce administrative costs, the Chilean law stipulated the following measures: (a) biannual affiliate bidding, so the AFP that offers the lowest commission wins the affiliation of the 200,000 people who enter the labor market annually (the reduced commission also has to be applied to old affiliates); (b) elimination of fixed-sum commissions that had regressive effects; and (c) authorizing banks to manage individual accounts in competition with AFPs. In order to goad workers and pensioners into participation, the Chilean law created a commission composed of five representatives, one from each of the following categories: workers, pensioners, AFP, insured remaining in the public system, and academia. Representatives have monitored performance in light of the reform goals and contrived strategies for members' education, diffusion of information, and communication.

Since the early 1980s, pension reform was only one element in a platform of all-encompassing socioeconomic reform that has managed to propel a productive economy with productive employment and a pension system that does not discourage productive activity by people when they get older.

Japan: A Double-Edged Sword—Super-Aged, Poorly Funded

The Japanese case represents a nearly unparalleled cause for concern due to the intersection of three demographic trends: a sharply declining birth rate, a baby boom generation approaching retirement, and steady longevity increases during the postwar era. Japan already has the oldest population in the world: The proportion of those older than age 65 was more than 25 percent in 2005 and will have risen to more than 40 percent by 2050 (information in this section from Takayama, 2001, 2009b; McLellan, 2004; Kang and Lee, 2009; Shinkawa, 2005; Huh and McLellan, 2006). The emerging intergenerational gap, a disparity between lifetime contributions and benefits, has begotten a disequilibrium between contribution inflows and benefit outflows that entails either elevated pension expenditures or reduced benefits.

From the outset, the Japanese system was institutionally fragmented, segregated, and overly diversified. The pension system mirrored dual economic structures in employment conditions and income, along with differences of region and class; plans have also varied in terms of financial stability and adequacy of benefits. This deep-seated fragmentation caused fiscal tightness in specific pension plans as early as the early 1980s; as the population continued to age, it became paramount to overhaul the pension system.

Japan's occupationally divided pension system has its origin in public sector pensions. Naval, army, and civil service pension plans had been introduced at the initial stage of nation-state building between 1875 and 1894 and were unified in 1923. The government was reluctant to introduce a public pension for private sector employees, since the government shared with employers the concern that it would contribute to unionization across firm lines and reinforce unity. Retirement payments in lieu of pensions were commonplace amid large firms in the 1920s.

World War II witnessed the advent of the first pension in Japan as part of a social insurance system. The pension insurance law of 1942 was supposed to accommodate the costs of war and to raise productivity. In 1944 it was supplanted by the Employee Pension Insurance plan (EPI). When the economy got on the right track in the mid-1950s, welfare bureaucrats influenced by the Beveridge Report reactivated the EPI as an umbrella plan providing employees in the private sector with a major source of income in retirement. The Japan Employers' Association (JEA) insisted that employers be allowed to opt out of the earnings-related component of EPI. In 1966, the government yielded and launched the Employees' Pension Fund (EPF), which vested companies with the rights to administer and invest the income-related portion of the EPI transferred from the government to the company and referred to as “contract-out.” With this piece of legislation, Japan's pension system of commingled public and private benefits was born. The system of “pension for all” was completed by imposing membership in the National Pension Insurance (NPI) on those uncovered by employees' plans except housewives.

A grand-scale reform in 1985 introduced the Basic Pension (BP) to tie benefits to contributions. The BP constitutes the first tier of public pension that covers all citizens aged 20 and over; it requires 40 years of contributions to guarantee a full benefit. It is pay-as-you-go (PAYG) and subsidized substantially by tax revenues. All administrative costs and a third of benefits were paid out of tax revenues in 2004; beginning with fiscal 2009, the government's share grew, subsidizing half of the total cost of the flat-rate basic benefit but with no subsidy for the earnings-related part.

The BP has comprised three different types of insured. First-type insured have been members of the NPI, covering farmers, the self-employed, the unemployed, and students. The second type has consisted of employees in both the public and private sectors, aside from those who worked for companies with fewer than five employees. The third type has included spouses of second-type insured whose yearly earnings were lower than 1.3 million yen. First-type insured were expected to register themselves and contribute the fixed amount of premiums of their own accord. Over 20 percent of first-type insured have been exempted partially or entirely from contribution due to their low income. Even though the BP has been mandatory, approximately 600,000 potential first-type insured have not joined the program, and more than 3 million first-type insured have been in arrears. Second-type insured could not fail to pay contributions because these have been automatically deducted from their paychecks. They have paid premiums of 13.58 percent of monthly salaries (half paid by employers), but usually have been oblivious of how much they pay specifically to the BP, because their entire contributions have gone to their second-tier occupational plans, from which a certain portion has been transferred to the BP. Third-type insured have been exempted from contributing. The major aim of the BP was to rescue the NPI from deficits. No effort was made to coordinate the various plans, since doing so might have triggered resistance to the integration among employees.
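
The three insured categories and their contribution treatment can be summarized in a rough classification sketch. The 1.3 million yen earnings ceiling, the fewer-than-five-employees exclusion, and the 13.58 percent premium rate come from the text; the flat premium amount for first-type insured is a hypothetical placeholder, since the text states only that it is a fixed sum.

```python
# Rough classification of the three insured types in the Basic Pension, following
# the description above. The 1.3 million yen ceiling, the fewer-than-five-employees
# exclusion, and the 13.58% premium rate come from the text; the flat premium for
# first-type insured is a hypothetical placeholder.

def insured_type(employed, employer_size, dependent_spouse_of_employee, annual_earnings_yen):
    """Return 1, 2, or 3 following the BP categories sketched in the text."""
    if dependent_spouse_of_employee and annual_earnings_yen < 1_300_000:
        return 3   # third type: dependent spouses of employees, exempt from contributing
    if employed and employer_size >= 5:
        return 2   # second type: employees, premiums withheld from paychecks
    return 1       # first type: farmers, the self-employed, the unemployed, students

def monthly_contribution(itype, monthly_salary_yen=0.0, flat_premium_yen=13_000.0):
    if itype == 2:
        return 0.1358 * monthly_salary_yen / 2   # employee's half; employer pays the rest
    if itype == 1:
        return flat_premium_yen                  # fixed premium, paid of one's own accord
    return 0.0                                   # third type is exempt

print(insured_type(employed=True, employer_size=50,
                   dependent_spouse_of_employee=False, annual_earnings_yen=4_000_000))  # -> 2
print(round(monthly_contribution(2, monthly_salary_yen=300_000)))                       # -> 20370
```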

The second tier has provided employees with moderate earnings-related pensions. It has subsumed the EPI that provided for private sector employees and Mutual Pension Plans (MPPs) for public and quasi-public servants (personnel in private schools and quasi-public corporations such as cooperatives in agriculture, forestry, and fishery). The central government and local governments have had their own independent plans. The second-tier plans were PAYG, but their administrative costs have been financed out of tax revenues, not unlike the BP. Employees were not allowed to claim exemption due to their low incomes, and no subsidies have been available. Firms were allowed to opt out of the EPI and forge their own funds (EPFs), but despite their private management, EPFs have been regulated and supervised by the government, since they have had a substitute for the EPI in addition to a purely company-specific pension. An EPF is an independent juridical entity, which a sponsor company/group of companies cannot dissolve of its own accord. In case investment returns have been lower than the officially required interest rate, sponsor companies have had to make up the difference.

In March 2000, in the wake of the decade-long recession of the 1990s, a pension bill restructured the social security system in order to fine-tune it in light of changing socioeconomic circumstances. Pension benefits were to be reduced by 20 percent by 2025; earnings-related benefits were to be reduced by 5 percent; and flat-rate basic benefits and earnings-related benefits, which had once been tied to wages and updated every five years, were to be tied to the consumer price index after the age of 65.

A defined-contribution (DC) plan was introduced from October 2001, followed by the Defined Benefit (DB) Occupational Pension Act that took effect on April 1, 2002. Contrary to expectations, DC plans in Japan have received at best a lukewarm response. Unlike similar endeavors around the world (including the United States), the DC act placed a very low cap on contributions, which precluded many companies from fully converting existing plans; it also prohibited early withdrawals by employees before they reached the age of 60 and did not permit employee contributions, which kept assets small and locked in. The cash balance (CB) plan, a hybrid, has taken precedence; such plans exemplify a shared burden or compromise typical of Japan's labor relations while combining what many consider the best features of DC and DB plans.

Global changes in international accounting standards in 2000 induced new accounting standards in the Japanese corporate pensions industry. When corporate pensions formally developed in Japan during the postwar period, tax rules did not allow companies to expense current contributions or prefunding. As a result, companies found it more convenient simply not to prefund pensions. International trends and the overall globalization of accounting standards increased the pressure on Japan to adopt stringent accounting and disclosure requirements. Writing down a pension liability on the balance sheet and disclosing the full extent of pension assets and liabilities in the footnotes made transparent for the first time the status and questionable ability of corporate pensions to assume a stronger role in retirement savings. Dramatic demographic pressures put the PAYG system in a precarious position: how to provide benefits to an expanding base of retirees while the contribution base of current workers is shrinking. The strain on the public pension system further increases the pressure on corporate pensions to fulfill a greater role in the postretirement equation for Japan.

Japan's record keeping has been deficient. Due to human errors made by enrollees, their employers, and agencies, there have been around 50 million "floating" records of social security pensions, that is, records that have not been integrated under the unified pension numbers. Before January 1997, pension identification numbers were issued to each participant on a regional basis irrespective of pension program; upon migration to another region, company, or pension plan, and following marriage or divorce, these numbers changed. Since there was no requirement to add up covered years across different pension plans, many Japanese were likely to have two or more pension identification numbers before retirement. It was only in January 1997 that the unified pension identification number was introduced for all eligible persons in Japan. Pending unification, the Social Insurance Agency (SIA), which manages social insurance including pensions, found that there were some 300 million identification numbers for pensions, whereas eligible persons totaled around 100 million at that time. When the SIA sent those persons postcards entreating them to list all pension numbers they had held in the past, more than 90 percent failed to comply, setting the stage for a national scandal that contributed to the defeat of Abe's administration in the upper house election of July 2007.
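
A toy example may clarify the record-unification problem behind the "floating" records: before 1997 one person could accumulate several plan- or region-specific identification numbers, and consolidation means grouping those old numbers, and their covered months, under a single person. The matching key and the data below are entirely hypothetical.

```python
# Toy illustration of the record-unification problem described above: before
# 1997 a person could hold several plan- or region-specific pension IDs, so
# consolidating contribution histories means grouping old IDs under one unified
# number. The matching key (name plus birth date) and the data are hypothetical.

from collections import defaultdict

old_records = [
    {"old_id": "TOKYO-001", "name": "Sato Yuki", "birth": "1950-04-02", "covered_months": 96},
    {"old_id": "OSAKA-417", "name": "Sato Yuki", "birth": "1950-04-02", "covered_months": 58},
    {"old_id": "EPF-88321", "name": "Sato Yuki", "birth": "1950-04-02", "covered_months": 120},
    {"old_id": "KYOTO-220", "name": "Tanaka Jiro", "birth": "1948-11-30", "covered_months": 300},
]

def consolidate(records):
    """Group old IDs by (name, birth date) and sum the covered months."""
    merged = defaultdict(lambda: {"old_ids": [], "covered_months": 0})
    for rec in records:
        key = (rec["name"], rec["birth"])
        merged[key]["old_ids"].append(rec["old_id"])
        merged[key]["covered_months"] += rec["covered_months"]
    return dict(merged)

for person, data in consolidate(old_records).items():
    print(person, data["old_ids"], data["covered_months"])
# Records that cannot be matched to any person remain "floating", which is
# exactly the gap the unified pension number was meant to close.
```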

Japan has not yet managed to guarantee the solvency of its pension system; endless reform and the "floating" records scandal have eroded the system's credibility and legitimacy for the long term, and Japanese governments do not seem able to turn the tide. Among first-type insured (independent workers, atypical workers, the self-employed, and persons with no occupation), the share dropping out of the basic level of protection because of exemption, arrears, or deliberate avoidance of the program rose from 35 percent in 1992 to around 54 percent by March 2007. Those who have dropped or opted out will receive a smaller pension or none at all by the time they grow old, so they are likely to rely on means-tested public assistance. More and more atypical employees as well as part-timers are not covered under any earnings-related plan. In April 2007, 1.6 percent of people age 65 and older received no social security pension, mainly due to insufficient years of contribution. Coverage is projected to shrink further alongside a persistent drift out of social security programs due to eroded trust.

Poland: Farewell Redistribution, Hello Funded Defined Contribution

Pension systems in Western Europe have raised retirement age and gradually equalized the retirement age of men and women, whereas Central and Eastern European (CEE) countries and former Soviet republics were bent on more radical reforms (information in this section from Wiktorow, 2007; Fultz, 2004; Hausner, 2002; Chlon-Dominczak, 2009; Chlon-Dominczak and Strzelecki, 2010; Zajicek and others, 2007; Chlon, 2000; Jarrett, 2011; OECD, 2008). In the late 1990s, Hungary and Poland were the first CEE countries to radically restructure pensions, replacing part of their public PAYG pension plans with commercially managed individual savings accounts. Entrenched vested interests of certain occupational branch groups (mainly in mining and heavy industry), which had held sway in socialist systems, were the main problem inherited from that past that rendered those PAYG systems more susceptible to political pressures. Each country developed its own structural compromise to reduce labor market distortions while providing adequate income in a way that reflects its own unique social history, economic situation, and political preferences.

The pension system in Poland dates back to the interwar period. It became a full-fledged universal PAYG system in the 1950s. In 1989, the first postcommunist government opted for "shock therapy" in the form of rapid market reforms instead of gradual transition. Aside from privatizing and closing inefficient state-owned enterprises while eliminating their "easy" credit, reformers advocated free prices, drastic cuts in subsidies, wage controls, and a balanced budget. Such measures escalated unemployment and begot a short but deep recession and several years of hyperinflation. Pensions had not been uprated until the 1980s, when inflation rapidly eroded their purchasing power. In 1982, legislation adjusted pensions in accord with the rising cost of living. Those who had already retired received index-linked benefits. The pension rate was calculated at 100 percent of a worker's base salary up to the first 3,000 zlotych, and then at 55 percent of the remaining amount. The minimum pension could not be lower than 90 percent of the lowest mean wage.
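
The 1982 benefit formula quoted above lends itself to a short worked example. The 3,000 zloty threshold, the 55 percent rate, and the 90 percent floor come from the text; the wages used below are illustrative.

```python
# Worked version of the 1982 Polish benefit formula quoted above: 100% of the
# base salary up to the first 3,000 zlotych, 55% of the remainder, and a floor
# at 90% of the lowest wage. The example wages are illustrative only.

THRESHOLD_ZL = 3_000.0
UPPER_RATE = 0.55
FLOOR_SHARE = 0.90

def pension_1982(base_salary_zl, lowest_wage_zl):
    benefit = (min(base_salary_zl, THRESHOLD_ZL)
               + UPPER_RATE * max(base_salary_zl - THRESHOLD_ZL, 0.0))
    return max(benefit, FLOOR_SHARE * lowest_wage_zl)

# A worker with a 10,000 zloty base gets 3,000 + 0.55 * 7,000 = 6,850 zloty;
# a very low base is lifted to 90% of the lowest wage instead.
print(pension_1982(base_salary_zl=10_000, lowest_wage_zl=4_000))  # 6850.0
print(pension_1982(base_salary_zl=2_000, lowest_wage_zl=4_000))   # 3600.0
```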

The 1991 Pension Re-evaluation Act automatically uprated pensions in line with earnings and introduced earnings tests to alleviate labor market problems. Retirees could earn up to 60 percent of the average national wage without penalty. For earnings between 60 percent and 120 percent of the average wage, the benefit was reduced by the amount earned in excess of the 60 percent threshold; the entire benefit was withdrawn if a retiree earned 120 percent or more. The 1991 act also gradually increased the number of years used to establish the salary base for calculating pensions. Since the salary base tends to be higher when fewer years can be selected, this provision lowered benefit levels across cohorts of retirees. Indexation was not favorable for pensioners in the early 1990s because prices increased faster than wages. However, real wages grew after 1993, and pension spending spiked correspondingly. After the subsidy to the social insurance fund from general revenues reached 5.7 percent of GDP, the government cut pensionable income from 100 percent to 91 percent of actual pay. From 1995 onward, benefits were indexed to prices.
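
One plausible reading of that earnings test is sketched below; the helper function, the boundary treatment, and the figures are illustrative assumptions, not the statutory text.

# Illustrative sketch of the 1991 earnings test as read above; amounts are in
# arbitrary monthly units and the exact boundary treatment is an assumption.

def earnings_tested_benefit(benefit, earnings, average_wage):
    """Benefit payable given a retiree's earnings relative to the average wage."""
    ratio = earnings / average_wage
    if ratio <= 0.60:                      # up to 60% of the average wage: no penalty
        return benefit
    if ratio < 1.20:                       # 60-120%: reduce by earnings above the threshold
        excess = earnings - 0.60 * average_wage
        return max(benefit - excess, 0.0)
    return 0.0                             # 120% or more: the whole benefit is withdrawn

# Hypothetical retiree: benefit 800, earnings 700, average wage 1,000.
print(earnings_tested_benefit(800, 700, 1_000))   # 800 - (700 - 600) = 700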

Despite government efforts, some groups were able to preserve or even add new occupational privileges. For instance, in 1995 the average monthly pension for both the military and the police was more than double the pension for the general population ($347 and $291 respectively, compared to $139). In fact, in 1995 the average salary was 32 percent lower than the average military pension. Spending also spiked because too many people retired early: those subject to group layoffs could retire regardless of age if they fulfilled the contribution criterion (40 years for men and 35 years for women). Workers in reorganized sectors were also offered early retirement. Almost a quarter of the workforce was covered by some kind of early retirement privilege, which seemed a convenient way to reduce labor market pressures. Access to disability benefits was relatively easy. Despite policies to counteract the labor market mismatch and facilitate restructuring following transition, unemployment spiked in the second half of the 1990s, worsening labor market conditions.

A watershed still lay ahead: in 1999 a reform inspired by the Swedish model was put into operation. The mandatory pension system was now based on two components: notional defined contribution (NDC) and funded defined contribution (FDC). Persons born before 1948 were not covered by the reform and remained in the old defined-benefit pension plan, and their old-age pension contributions (19.52 percent of their wage) were transferred to the Social Insurance Fund (ZUS). People born between January 1, 1949, and December 31, 1968, had the option to split their contributions between NDC and FDC accounts or to have only an NDC account (that is, their enrollment in the pension funds of the second, capital pillar was voluntary). For this cohort, all pension rights accrued prior to the enactment of the reform were recalculated into initial capital and credited to the NDC account, and the decision whether to split contributions between the two pillars or remain solely in the public PAYG system was irrevocable. The new NDC system revamped the public pension scheme so that benefits reflect each individual's own contributions in a more nearly linear way. Future pensioners are supposed to receive the benefits they have “paid for”: redistribution toward low-income earners is eliminated, and benefits diminish automatically as average life expectancy grows.

Approximately 70 percent of those who were allowed to choose opted for membership in the pension funds under the second capital pillar. Contributions of people who remained in the first pillar alone (19.52 percent) were transferred in full to the Social Insurance Fund. People who decided to become members of pension funds under the second pillar transferred 12.2 percent of their wages to the Social Insurance Fund, while another 7.3 percent went to a selected open pension fund. People born after December 31, 1968, have been mandatorily covered by the two-pillar pension. Only the state budget could bridge the ZUS deficits that stemmed from shifting contributions to the second pillar. Pensions have been calculated on the basis of the assets accumulated in both accounts and unisex life expectancy at retirement age. The pension formula was not progressive; that is, retirees could expect to receive the same proportion of their earnings at retirement regardless of their wage.
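
A minimal sketch of that contribution split, using a hypothetical wage, is given below; the function name and the example figures are illustrative only, while the percentages are those quoted in the text.

# Illustrative sketch of the contribution split described above.

def split_contribution(monthly_wage, joined_second_pillar):
    """Split the 19.52% old-age contribution between ZUS and an open pension fund."""
    if joined_second_pillar:
        to_zus = 0.122 * monthly_wage     # first pillar: NDC account at the Social Insurance Fund
        to_fund = 0.073 * monthly_wage    # second pillar: individual account at a chosen fund
    else:
        to_zus = 0.1952 * monthly_wage    # the whole contribution stays with ZUS
        to_fund = 0.0
    return to_zus, to_fund

# Hypothetical worker earning 4,000 zlotych a month:
print(split_contribution(4_000, True))    # (488.0, 292.0)
print(split_contribution(4_000, False))   # (780.8, 0.0)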

The 1999 reform produced a pension system with pronounced gender inequalities. Differences in pension levels derive not only from higher life expectancy for females but also from disparate wage levels, length of work, and retirement age (60 for women versus 65 for men). Women therefore face a higher risk of low pension benefits. Eligibility for the minimum old-age pension required retiring under both pillars simultaneously, and only persons who had paid contributions for at least 20 years (women) or 25 years (men) qualified. If the total old-age pension payable under the first and second pillars was lower than the statutory minimum, it was topped up to the minimum old-age pension, financed from the state budget.

According to OECD calculations, the replacement rate for covered workers diminished by 37 percent, to around 50 percent in total, with about half of that coming from the first pillar and half from the defined-contribution pension funds (using historical average data on investment returns). The government capped the wage base for contributions at 2.5 times the average wage, which lowered government revenues in the short term by an estimated 0.4 percent of GDP. Defined contributions tied benefits to earnings rather than occupation and reduced the occupational inequities embedded in the communist system, in an attempt to curb early retirement and prolong working life. Success has been only partial because of a transition rule that enabled persons who had qualified for all retirement rights before the end of 2008 to retire under the old system.

Since the end of the 1990s, the gap between the employment of older workers in Poland and in the rest of the EU has been widening, because labor market participation of persons aged 50 and older remained one of the lowest in the EU. People covered by the new pension system only started to retire in 2009. Recent legislation has precluded most early retirement options. “Bridging pensions” had been granted in some instances based on a medically verified list, but in 2009 policy makers managed to narrow the list of eligible occupations and shrink bridging pensions. Pensioners were dismayed by the new layer of management and administration interposed between them and their savings in the open pension funds (OFEs). The government issued bonds to finance a deficit, a large part of which was attributable to transfers to the OFEs. Despite the radical cut in replacement rates since 1999, the system was still not fully financed by contributions. The financial imbalance was supposed to be resolved by devoting the proceeds of privatization to the pension system, but sales generated less revenue than the government's transfers to the OFEs. The government could no longer afford sizable transfers to the OFEs while it had to consolidate public finances: when the burden on public finances approached the government debt ceilings of 55 and 60 percent of GDP, stipulated by law and by the constitution respectively and triggering automatic cutbacks, it was forced to take action.

The epilogue to the 1999 reform arrived rather hastily, when parliament quickly passed a special bill that took effect on May 1, 2011. The reform package cut contributions to the OFEs from 7.3 percent to 2.3 percent of earnings (with a planned recovery to 3.5 percent between 2013 and 2017), with the difference diverted to the Social Insurance Fund (ZUS). Tax incentives were introduced for optional third-pillar retirement savings. Before the partial policy reversal, official estimates pointed to a cumulative cost of 94 percent of GDP by 2060. The government's proposed shift of contributions from the OFEs to the first pillar was supposed to reduce this alarming figure from 94 percent to 44 percent and to cut the value of pensions, although the value of the public component was projected to rise. Gross replacement rates relative to final salary would be lower for more recently born cohorts than for today's retirees, owing to demographic aging. The private pension market did not stand still: at the outset of the reform there were 21 pension funds; by 2007 their number had shrunk to 15, and further mergers and acquisitions are pending.

Sweden: Transforming Corporatism and Remodeling Labor Market Policies

From 1890 to 1920, labor and social movements pursued democratization as a joint project with liberal elites who had dissented from an antidemocratic conservative regime. Those movements established municipal employment offices and built a strong national presence that forged links between workplaces, local organizations, cooperatives, and neighborhood organizations (information in this section from Calmfors and others, 2002, 2004; Vandenberg and Hundt, 2012; Forslund and Krueger, 1997, 2010; Lindvall and Sebring, 2005; Anthonsen and others, 2011). During the interwar depression the government organized relief works and special youth jobs. The foundations of modern labor market policy were laid in 1948, when the National Labor Market Board was instituted.

Gösta Rehn and Rudolf Meidner, two trade union economists, laid out the principles of Swedish labor market policy in the late 1940s and early 1950s, devising labor market retraining and other measures to enhance mobility so that the unemployed in low-productivity sectors could move to high-productivity sectors, relieving labor shortages there. An active labor market policy was also presumed to be conducive to low inflation, full employment, and wage compression. Between 1960 and 1990, the postwar concern with labor mobility broadened to all types of unemployment. Many are struck by the breadth and generosity of Sweden's labor market programs designed to limit the adverse effects of unemployment and expand employment. These programs include extensive job training, public sector relief work, recruitment subsidies, youth programs, mobility bonuses, and unemployment benefits. Unemployment insurance (UI) benefits are said to be passive labor market policies, whereas labor market programs are considered active labor market policies. This distinction rests on the notion that UI benefits are paid as compensation for not working. For Sweden, it is a misleading distinction owing to a set of rules designed to ensure that the unemployed person is available for work and is actively searching for a job in various ways. The Public Employment Service (PES) is the spider in this web.

Active labor market policies refer to supply-side measures intended to assist the unemployed in finding a paid job. From the late 1950s on, spending on active labor market policy such as labor market training has been exceptionally high in Sweden. Unlike many other countries that have relied on cash transfers to the unemployed, Sweden has always pursued active policies designed to eliminate open unemployment and reintegrate the unemployed. Lenient labor market policies reflect a long-lasting corporatist tradition in Sweden. When it comes to state and society, corporatism conceives of a state that actively intervenes to recognize, license, or create particular groups with the organized capacity to represent given interests so long as they select moderate leaders to articulate their demands. In corporatist systems, trade unions and employer organizations are invited to develop and implement public policy in tandem with the state and other interest organizations. For the past 60 years, Sweden has occupied one extreme of the corporatism spectrum.

Up to 1974, unemployment compensation had been provided only by so-called certified UI funds, run by trade unions at the industry level but mainly tax financed; a supplementary compensation system (kontant arbetsmarknadsstöd, or KAS) was created in 1974, designed mostly for new entrants to the labor market who were not usually members of any UI fund. In 1990, coverage was slightly less than 80 percent of the labor force. UI fund members were entitled to compensation for 300 days (450 days for workers over age 55), whereas cash benefit assistance used to run for 150 days (300 days for those over age 55, but 450 days for those over age 60). Compensation rates in UI funds were fixed within limits by the government; the maximum level had been set at 90 percent of the recipient's normal income prior to unemployment, and in July 1993 it was cut back to 80 percent. Because the funds were run by trade unions at the industry level, coverage roughly coincided with wage-bargaining units; state grants to the UI funds had been designed so that the marginal cost of extra unemployment among fund members was zero. A number of criteria, many of which were common to the UI funds and KAS, had to be met for a person to be entitled to compensation. The two most prominent conditions were that recipients actively search for a job at a public employment office and that an offer of “suitable” work be accepted. After unemployment benefits had run out, individuals were eligible for social security that offered significantly less generous compensation. To receive compensation from a UI fund, a claimant must have paid membership fees to the UI fund for at least 12 months and must have worked for at least 75 days spread over at least four months during the 12 months preceding the unemployment spell. Participation in relief work as well as in labor market retraining programs counted as work in that respect.
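
As a rough illustration of the membership and work conditions just described, here is a minimal sketch; the function, its inputs, and the way months are counted are assumptions made for clarity, not the statutory rules.

# Illustrative sketch of the UI-fund entitlement conditions described above.
# The monthly-work representation is an assumption made for clarity.

def eligible_for_ui(months_of_paid_membership, days_worked_by_month):
    """Check the two conditions from the text: at least 12 months of paid
    membership, and at least 75 days of work spread over at least four of
    the 12 months preceding the unemployment spell (relief work and
    retraining count as work)."""
    if months_of_paid_membership < 12:
        return False
    last_year = days_worked_by_month[-12:]
    months_with_work = sum(1 for days in last_year if days > 0)
    return sum(last_year) >= 75 and months_with_work >= 4

# Hypothetical claimant: 14 months of membership, 20 days of work in each of
# the last four months before unemployment.
print(eligible_for_ui(14, [0] * 8 + [20, 20, 20, 20]))   # True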

In the early 1990s, Sweden entered its deepest recession of the postwar period. Between 1990 and 1993, the Swedish unemployment rate spiked from 1.5 percent to 8 percent, or from just under 3 percent to 13 percent when measured by total unemployment (the sum of open unemployment and program participation). Active labor market policies (ALMPs) became the main short-run policy instrument to counteract the rise in open unemployment while providing income support for the unemployed. Although compensation could be granted for no more than 14 months, eligibility could be renewed through participation in ALMPs, which were systematically used to this end. Many in Sweden and elsewhere questioned whether ALMPs could actually account for the earlier low levels of unemployment. Longitudinal research conducted alongside international comparisons found very little evidence that ALMPs played a key role in keeping Swedish unemployment low. Relief work, for example, crowded out regular jobs in some sectors, so the net effect on unemployment was smaller than the number of participants.

Existing labor market programs were modified and new ones were set up in light of accumulated experience. Between the late 1980s and the early 2000s, participation in training programs enabled participants to renew their eligibility for UI benefits. This arrangement was abolished for all labor market programs in 2000 in connection with a reform of the UI system and the introduction of the activity guarantee. Trainee replacement programs (1991–1997) subsidized employers who trained or hired employees from the Public Employment Service (PES). Self-employment grants were yet another type of subsidized employment. Contingent upon preliminary scrutiny by employment offices, which also arranged entrepreneurial training, these grants were given to unemployed persons to start their own businesses; they consisted of unemployment benefits for up to six months.

In the 1990s, relief jobs were used only to a small extent prior to their final abolition in 1998; they were replaced by so-called work experience programs, in which participants were supposed to be placed in jobs that would otherwise not have existed, in order to avoid crowding-out effects. Recruitment subsidies had been introduced in 1981, only to be superseded in 1998 by employment subsidies targeted mainly at the long-term unemployed.

Resource jobs were introduced in 1997. This program subsidized employers for temporarily hiring unemployed workers (six months, with an option to prolong the placement by three months). Participants were supposed both to work and to undergo training. The wage rate was capped at 90 percent of participants' previous income. The activity guarantee has targeted persons at risk of becoming long-term registrants at the PES, or those who have exhausted their UI benefits. The activity guarantee is a framework whereby participants receive the equivalent of UI benefits but in return are supposed to search for a job, to participate full time in a regular labor market program, or to be engaged in some training program. There are only three ways to leave the guarantee: by finding a regular job lasting at least six months, by participating in regular education, or by leaving the labor force. Changes have also been made to the rules of the UI system. Participation in labor market programs no longer renews UI benefits; the only way to renew eligibility is through an ordinary job. If an unemployed worker fails to find a job within the 14 months of UI benefits, a caseworker at the PES office may decide either to transfer the worker to the activity guarantee or to award him or her another 14-month period of UI benefits. Those who do not manage to find a job after a second period of UI benefits are transferred to the activity guarantee or else lose all income support, apart possibly from means-tested social assistance.
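
The sequence of steps just described can be summarized in the following sketch; the function and its arguments are illustrative assumptions, not the PES's actual rules.

# A rough sketch of the benefit sequence described above; the decision inputs
# are simplified assumptions for illustration.

def status_after_ui_period(found_ordinary_job, ui_periods_used, second_period_granted):
    """Where an unemployed person ends up after a 14-month UI benefit period."""
    if found_ordinary_job:
        return "eligibility renewed through ordinary work"
    if ui_periods_used == 1 and second_period_granted:
        return "second 14-month period of UI benefits"
    return "activity guarantee (or, failing that, means-tested social assistance only)"

print(status_after_ui_period(False, 1, True))    # second 14-month period of UI benefits
print(status_after_ui_period(False, 2, False))   # activity guarantee (...)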

Administrative corporatism entails a powerful bureaucracy to steer labor market policy and social partnerships. The most dramatic institutional changes occurred in the early 1990s, when the Confederation of Swedish Employers decided to withdraw from the governing boards of almost all government agencies including the National Labor Market Board—a highly independent agency that had been vested with both policy making and implementation. Following the withdrawal of employers, labor unions were excluded as well by the government, since their participation was not seen as legitimate after their counterparts had left. Representatives of unions and employers have remained in some advisory functions, but the individuals in question do not represent the organizations formally but rather have personal mandates. From the outset of the 1990s recession, employers have advocated liberal policies that have been at odds with the idea of large government programs for workers' retraining and relocation. Labor market policies have been increasingly marked by fundamental conflicts between employers and unions and between left and right because both government and unions give priority to redistribution over economic growth.

The social democrats were in government from 1982 to 1991 and from 1994 to 2006, so they have held office during most of the past 25 years. An outside option per se has not been a strong reason to bypass corporatism; rather, when the unions expect to achieve more favorable outcomes through direct cooperation with their social democratic allies, they do so rather than bargain within corporatist institutions. The Swedish Employers' Confederation (Svenska Arbetsgivareföreningen, or SAF) and its successor organization, the Confederation of Swedish Enterprise (Svenskt Näringsliv), reoriented themselves strategically and decided to engage in lobbying rather than sustain corporatist arrangements, thereby hastening the breakdown of Swedish corporatism.

Formerly, the main function of ALMPs was to cushion the blow for those who became unemployed, not to speed reemployment or increase overall employment. From 1941 to 1991, Sweden maintained full employment owing to social democratic ideology and to a long-term institutional buildup embedded in a political culture of independence from neoliberal directives. Desperate times called for bold measures, since the lenient system was highly susceptible to exploitation and misapplication. Active labor market policies mitigate the moral hazard of generous unemployment insurance: by making compensation conditional on accepting regular job or placement offers from employment offices, ALMPs test the determination to work. Devising policies to raise employment stays on the front burner at a time when chronic unemployment looms large in Western Europe. Many scholars praise Sweden's commitment to supporting the unemployed and to reforming the mix of ALMPs whenever they seem less efficient than they could be, in an ongoing search for an optimal formula.

Comparing Health Care, Education, and Social Services across Nations

This chapter dealt with three of the five giants identified by Beveridge as standing in the way of reconstruction: want (social security), ignorance (education), and disease (health) (Timmins, 1996, p. 12).

Over the past three decades, aging populations, together with a booming health industry of burgeoning new technologies, drugs, treatments, and ancillary services, have put considerable financial strain on public health systems. Increasingly, governments need to balance countervailing tendencies: to settle equity against choice, to enable universal access while deterring free riding, and to provide high-quality treatment while still containing costs. Although the measures governments take are contingent upon historical policy legacies and institutional settings, a convergence is evident as very similar tactics are devised.

Singapore seems to have it all: the 3Ms system has instilled some cost consciousness in citizens, who are presented with a bill for services rendered. There are no free riders, as the government picks up the tab only after means-testing the applicant and his or her family, so Medifund is activated only as a last resort. Singapore's triad of a healthy population, highly satisfied patients, and low public expenditure on health as a percentage of GDP is envied by many developed countries. Yet customer satisfaction, like the general health of a given population, hinges upon many factors; neither can be ascribed to the relative or the absolute performance of health systems (Barr, 2001; Pauly, 2001).

To keep social insurance sustainable, the Netherlands also came to grips with its health care system, undertaking to privatize it completely. By adopting Enthoven's blueprint, the Netherlands has come as close as possible to his theoretical model of “managed competition.” By thwarting “cream skimming” and risk selection, as Enthoven suggested, the Netherlands has purportedly removed the most prevalent obstacles to free choice and universal access. It took more than 20 years to realize the reform fully, and it is still a work in progress. Both Singapore and the Netherlands have taken very similar measures to assure quality and responsiveness: the Dutch Health Care Inspectorate (IGZ), an independent agency within the Ministry of Health, Welfare, and Sport, conducts regular inspections while monitoring complaints and performance. Singapore has pursued a rigorous benchmarking regime for the past three decades, instilling quality assurance measures alongside innovative management techniques. Those efforts culminated in 2001, when five former national agencies were amalgamated to create an all-encompassing statutory board to license, accredit, survey, and audit for the sake of quality (Lim, 2004a).

Australia and Argentina have both struggled with the free rider problem, to which they were highly susceptible because of their particular mixes of private and public provision. The problem is further compounded by the inherent strain in federal systems, where responsibilities as to who funds, who decides, and who renders services are oftentimes blurred. Australia has managed to reduce incentives to free ride by giving tax concessions that induce low-income earners to take up private insurance while imposing financial penalties on high-income groups who would otherwise rather not buy private insurance. Many have subsequently opted for private insurance, but great dissatisfaction with it persists because of large out-of-pocket costs (Healy et al., 2006).

Argentina is beset with economic and social problems, so it is hardly surprising that its health system is also in dire straits. Argentina struggles with a fragmented social insurance system, too many free riders (institutions and individuals alike), and a very high percentage of uninsured people who have to rely on an underfunded, low-quality public system. Argentina has failed to resolve most of these issues; Remediar is the silver lining of an otherwise very dark cloud. Launched in 2002, this program provides free medical consultation and dispenses drugs free of charge, catering to the most vulnerable, indigent members of society (Cavagnero and Bilger, 2010). Argentina has failed to introduce a mandatory list of medical rights or entitlements, to consolidate the social health insurance system, and to reduce free riding. Of all the countries vetted in this section, Argentina is the only one that has neither forged an institutional rationing and allocating mechanism nor launched any performance indicators or quality assurance measures. Those two mechanisms interact, making it possible to ration the most evidence-based, high-quality treatments, but both are absent in the deficient Argentinian system.

Although the systems under consideration are structurally different owing to historical contingencies and institutional settings, they tackle similar difficulties and arrive at very similar solutions: privatization, some degree of competition to foster consumer choice, heavy inspection and regulation regimes, quality assurance measures, and the dissemination of information to create informed patients in light of the problem of information asymmetry.

The essence of modern economic growth theory is the notion that societies need a critical mass of educated people to provide the kind of seedbed in which new ideas and technical innovations prosper. Left to themselves, parents might not fully appreciate the capacity of their child or wish to invest optimally in his or her education, never having enjoyed the fruits of a good education themselves. For some, education may be a mere vehicle for ostentatious consumption; for others, a way to an egalitarian utopia; for yet others, a means of instilling respect for tradition (Glennerster, 1998, p. 28). As capital and ideas now flow across national boundaries with the speed of the electromagnetic spectrum, international interdependence has replaced national autonomy as the prevailing global condition, and countries are compelled to reexamine both their foreign and domestic policies. One consequence is that countries are increasingly occupied with similar policy issues and select from similar sets of ideas in seeking solutions. Intensified interdependence has its costs, since countries are presently tugged by dual forces of economic cooperation and commercial competition. For industrialized nations, international economic competition is currently the most intense force propelling national policy; in no other domain is it more evident than in education.

The central economic problem facing policy makers in industrialized nations is how to increase productivity. Economies are now dominated by high-cost money, extensive internal and external competition, rapidly shifting markets, and new technologies affecting products in terms of their development, production, and dissemination (Guthrie and Pierce, 1990). Because of these changes, human capital is becoming a critical economic resource rather than a dispensable factor of production. Employment growth, most critical to a nation's economic standing, is likely to occur not in basic manufacturing but in sectors such as advanced manufacturing, information, high technology, and specialized services. This entails a highly skilled, adaptable, and possibly creative workforce. At a time when many new jobs require higher communication, computing, and reasoning skills, national assessments of students indicate that entrants to the workforce frequently lack such faculties. Nations and businesses are interested in reforming education in order to furnish students with education, skills, qualifications, and motivation before they enter the workforce. Although individuals gain from being prepared for skilled work in the form of higher earnings in later life, benefits are not just reaped by individuals but spill over to the rest of the economy, since firms are attracted to areas replete with well-trained people (Glennerster, 1998, p. 28; Guthrie and Pierce, 1990).

Britain and the United States demonstrate how international competitiveness bears on industrialized nations, impelling them to espouse new policies to remain competitive; much of the blame for both America's and Britain's deteriorating economic positions has been placed on each nation's education system. The welfare state, together with the social democratic political consensus, reached its apotheosis in the 1960s, first in the United States and later in Great Britain. The oil shocks and recessions of the 1970s brought an end to those welfare states' heyday; thereafter, the right-wing strand prevailed, as neoconservatives denounced the welfare state, aiming markedly at its three pillars of health, education, and social security (David, 1991; Elliott and MacLennan, 1994).

Similar trends have been evident in both Britain and the United States, where reforms have emanated from the same New Right seedbed: attempts have been made to compel higher performance through testing and evaluation regimes while subjecting students' results to the glare of public information and citizen scrutiny. At the punitive end of this rigor spectrum, poorly performing schools face sanctions.

In the same New Right vein, a greater measure of customer choice has been injected into schooling. If parents and pupils can exercise options to attend a wider range of schools, so the argument goes, then educators will strive more forcefully to satisfy client expectations and enhance instructional productivity in the process. Britain has laid more emphasis on the former, whereas the United States has opted mainly for the latter. In both countries it is mostly the “haves” who can exercise choice: in Britain, the offspring of well-to-do parents either enjoy good state schools in their residential catchment area or are sent by their parents to a good private school; for them it is a win-win situation. In Wisconsin, the more knowledgeable, thrifty parents who are au courant and know when and how to pursue vouchers for their children get the upper hand. Outcomes have been rather dubious in any event. Many low-quality, ad hoc private schools have sprung up in Wisconsin to reap the benefits of publicly funded vouchers, utterly subverting the tenet that vouchers and choice cater to the deprived by providing them with high-quality education otherwise beyond their reach.

In Britain, the most apparent outcome of the last three decades of reforms has been a hostile, cowed, and abashed teaching profession, pitted against parents, officials, and administrators alike. There seems to be no persistent improvement of performance compared with European competitors, which casts a shadow over the British economy. The problem of unfavorable life chances, familial circumstances, poverty, and deprivation proves to be a classic wicked problem: it is persistent, intractable, and insusceptible to simple solutions or interventions. In Britain, from the first term of Thatcher's conservative government, strong centralization has taken place in almost all areas of the government's remit, including education. The advent of a New Labor government in 1997 did not change this tendency: centralization has carried on, all the more forcefully.

The French education system has moved in exactly the opposite direction, devolving funding to regional authorities and delegating managerial powers to university presidents. As in Britain, more accountability was infused into the system in the 1980s, so resources were no longer allocated through arbitrary administrative decisions but had to be justified, elaborated, and rationalized. For many years France and Britain were antipodes in terms of their education systems: Britain had a very dispersed and decentralized system, with every LEA functioning de facto as a local ministry, whereas France was highly centralized, hence the Ministry of Education had been dubbed Le Central, or “the center.” During the past three decades, those systems have become quite congruent as they met halfway on the centralization-decentralization continuum (Deer, 2002; De Meulemeester, 2003) and embarked on performance measurement in one form or another. Investment in education is commendable, but so long as governments are the main spenders, they exert their will in ways that may stand in the way of cherished academic freedom; depending on the public purse is not without its flaws, since he who pays the piper calls the tune (Deer, 2002).

The People's Republic of China has also been decentralizing its higher education system for the last 30 years, seeking to sustain its protracted miraculous growth while maintaining its competitive edge. China faces a tall order, since it endeavors to walk the fine line between a checked decentralization that gives rise to Western-style enterprise, innovation, and thrift and one that may give way to rebellious clamor of a Tiananmen Square kind. As in many other areas of economic activity, higher education is no longer a government stronghold allocating places, jobs, and positions: free-market mechanisms hold sway nowadays. The question of how far to devolve and encourage free thinking and initiative, and how much power to hold onto, will continue to be the elephant in the room. In terms of embracing NPM-style appraisal mechanisms to evaluate its higher education institutes, China has yielded to this (mainly Western) epoch's zeitgeist, upholding regular, strict, and systematic inspections.

Social security issues at large, and pensions specifically, have been high on the political agenda in almost every developed country for the past 30 years. The post–World War II progenitors of social security programs boosted existing programs and contrived new ones on the assumptions of high fertility rates, full employment, and a male breadwinner. Those assumptions were barely accurate even then, and over the last four decades things have changed drastically. Even the most developed economies in times of growth and prosperity endure some degree of unemployment and are unable to guarantee full employment for their inhabitants; this rudimentary Beveridgian presumption was forgone long ago. All around the world, populations age as longevity increases and fertility rates wane. Almost all pension programs were pay-as-you-go plans, meaning that the current working population paid for current retirees. As those systems matured, they became insolvent, weighing more and more on governments' budgets. Something needed to be done to address this demographic time bomb (Bonoli and Shinkawa, 2005).

Coinciding with the heyday of NPM, most countries opted for at least some degree of privatization. Pensions ceased to be a promise made by states to their future retirees and became instead a personal duty of workers to put aside some of their income for their future retirement, fending for themselves by securing their prospective revenues. The state remained in a residual position: regulating, enabling, and administering a safety net, a subsistence-level pension for indigent citizens who for one reason or another did not accrue sufficient resources to provide for their old age. This transmutation from defined benefits (DB) to defined contributions (DC) shifts the risk from governments to citizens and employers. Upon retirement, a monthly pension is derived from the accumulated sum and from extrapolated life expectancy (via tables that consider gender, family risk factors, and so on). Recent developments have shifted the burden further from employers to individuals alone in order to maintain low labor costs in a globalized world. Such is the case in Chile and Japan, where employers' contributions are no longer allowed, thereby ruling out the former risk sharing between employers and employees. The most salient task facing governments is to propel a cultural shift and make people comprehend that assuring their livelihood in old age is their own responsibility rather than their government's. Taking in this notion entails renouncing some current indulgence for the sake of future comfort. It is rather challenging to persuade people to forsake some of their conspicuous consumption when who you are is what you buy.
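
The contrast between the two benefit logics can be made concrete with the stylized sketch below; both formulas and all figures are illustrative assumptions rather than any particular country's rules.

# Stylized comparison of defined-benefit and defined-contribution logic as
# discussed above; accrual rate, salary, fund size, and life expectancy are
# hypothetical numbers chosen only for illustration.

def db_monthly_pension(final_salary, years_of_service, accrual_rate=0.015):
    """Defined benefit: the state or employer promises a fraction of salary
    per year of service, bearing the longevity and investment risk."""
    return accrual_rate * years_of_service * final_salary

def dc_monthly_pension(accumulated_fund, life_expectancy_months):
    """Defined contribution: the benefit is whatever the accumulated fund,
    exposed to market returns, buys over the expected months of retirement."""
    return accumulated_fund / life_expectancy_months

print(db_monthly_pension(4_000, 35))        # 0.015 * 35 * 4,000 = 2,100
print(dc_monthly_pension(420_000, 240))     # 420,000 / 240 = 1,750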

Current generations face a double burden, having to finance their parents' generation's pensions as well as their own. This phenomenon is most pronounced in Japan, which faces the heaviest pressures in the developed world from a demographic time bomb combining extreme longevity with dwindling fertility rates. Moreover, pensions were raised too generously during the post–World War II economic boom. After four and a half decades the party ended, and current spending on pensions no longer tallies with Japan's economic standing.

Another caveat regarding pension privatizations stems from the complexity of this product. The problem of asymmetric information between providers and clients is prominent because an average client for the most part is unable to fully understand figures, possibilities, and ramifications. By the same token, it is almost impossible to gauge even roughly the prospective monthly allowance a private plan will eventually yield, since funds are injected into the economy and invested locally and abroad, and hence subject to market fluctuations.

When privatizing, governments undertake to ensure a degree of choice and competition, albeit a curbed one, so that clients can compare products and be well informed rather than swindled or wrongfully lured into joining or switching between plans. Chile exemplifies the need to regulate and fine-tune the system continuously: toward the end of the 1970s the Chilean system had been too dispersed and segmented, consisting of 35 plans. Following privatization, only 12 AFPs, private for-profit corporations, composed the system; toward the end of the 1990s their number had almost doubled (to 21) after new competitors entered the market. Later on, that number dropped to six, an upshot of closures and mergers. Determined to perpetuate a certain degree of competition, the government oftentimes intervenes to adjust the system. What is quite conspicuous in Chile, however, is the radical introduction of a private social security system earlier than in most countries of the world; social security systems in general, and pensions in particular, prove difficult to retrench or restructure, since they are sensitive issues concerning entire electorates with their potential political clout. No wonder, then, that Chile's undemocratic, authoritarian government of the 1980s inaugurated a revolutionary reform, whereas most democratic governments need to make do with piecemeal, wavering, mainly incremental alterations.

In Poland, as in other CEE countries, the restructuring of pensions coincided with the restructuring of the entire economy during the transition to a full-fledged capitalist market economy. In 1998, a second, privatized pillar was introduced alongside the national, public one administered by the Polish Social Insurance Fund (ZUS). As private pension funds were injected into the national economy, an adverse flow of revenues from privatization proceeds to pension funds occurred. Apart from the transaction costs that needed to be considered, a new layer of management and administration was interposed between pensioners and their savings. As government debt approached its ceiling, the privatization was moderately reversed in 2011, with some of the private contributions diverted back to the public pension regime. Estonia, Hungary, Slovakia, Latvia, Lithuania, and Romania have recently partially or completely reversed their earlier moves from pay-as-you-go (PAYG) systems with relatively high contribution rates toward compulsory defined-contribution plans. Since Poland had to tackle complications typical of many CEE countries, it soon followed suit (Jarrett, 2011).

NPM-style measures have had different impacts on different sectors in many countries. In Sweden, NPM has been most apparent in active labor market policies (ALMPs). One of the most salient NPM tenets stresses the need for public institutions to be efficient and effective, namely, to do more with less. This mandate resonates in the changes Sweden has made to its ALMPs: whereas the formerly lenient ALMPs were mainly a means of muddying the real extent of unemployment, they have been transformed into straightforward measures that endeavor to reintegrate the unemployed into the labor market. For anyone seeking major retrenchment or restructuring of the Swedish welfare state, still one of the most expansive (and, some may argue, generous) of them all, ALMPs spring to mind as an example of a conspicuous policy shift. This shift is imputable, inter alia, to declining corporatism. Reform within existing social programs and labor market programs, sometimes necessary in mature welfare states, requires coordination across policy areas where interest organizations play a pivotal role. According to some accounts, corporatism has also been declining in the Netherlands, where corporatist politics in the 1980s and 1990s shaped the still germane wage agreements. Conversely, corporatism in Denmark has been said to be on the rise nowadays after a protracted ebbing from the 1970s on (Anthonsen et al., 2011; Lindvall and Sebring, 2005).

All of the cases portrayed above demonstrate how NPM-style reforms have dramatically altered the contours of the classic welfare state, reducing it to a regulator and an enabler rather than a service provider. The modern welfare state is no longer a “nanny.” Whether to dub those processes “retrenchment” or “restructuring” is a matter of taste and a rather obsolete, futile debate. It remains to be seen how states and governments will redefine their role in the future; their most challenging undertaking may be to revamp relationships with citizens, demarcating mutual responsibilities and obligations anew. As government-citizenry relationships are an ever-changing landscape, this issue is by no means new. In his book Politics in England, Richard Rose quotes a commentator questioning his government's ability to deal with contemporary complex problems at the very outset of the previous century: “Regarding our government, we have a few tacit understandings; the problem is that those understandings are not always understood” (Sidney Low, 1904, cited in Rose, 1989).
