CHAPTER NINE

TRADITIONAL GOVERNMENT ACTIVITIES I

National Defense, Police, Law and Order

This chapter covers core tasks over which governments hold sway: protection of the jurisdiction from external threats (the military) and internal unrest (the police), which, together with the system of law and the judiciary, constitute the core of a nation's sovereignty. Governments since antiquity have rested on these traditional services. Looking at various structures of power and legal systems, as well as at cultural diversity and reliance on values and tradition, we illustrate the administrative challenges facing states across the globe and suggest some elements of comparison. Attention is given to the general development of military, policing, and judicial services (including prisons) in a variety of countries from all continents, with special attention to how structures function within a given culture and to the administrative arrangements that have been set up to allow life to continue in a secure environment. We also discuss two specific and very contemporary global challenges: emergency management (designed to handle natural as well as man-made disasters) and security and migration policies.

Atrocities of Man and Nature: National Defense and Emergency Management

Fighting between (and sometimes within) groups has been part of human history since time immemorial. The military classes, as Mosca called them (1939, p. 222), have for most of history been at the apex of stratified society. Indeed, the commander of these classes was often also the ruler, leading ad hoc or mercenary armies. What changed in modern times, especially since the nineteenth century, is that states instituted standing armies (at least in the Western world) that are subject to civil authority (Mosca, 1939, p. 229). This had been unthinkable in premodern times, because armies had the capability of overthrowing political regimes via a coup d'état, which has happened in modern times as well; for instance, in nineteenth-century Latin America, in twentieth-century Europe (Greece, Spain, Portugal), and in twenty-first-century Egypt. The standing army is also a professional one, a full-time occupation requiring specialized training (Huntington, 1957, p. 19). The professional military emerged as the primary instrument for defending the territory against foreign aggression, but as early as the centuries before the Common Era, armies were also used in humanitarian relief. Initially, that relief took the form of interim military administration of conquered lands, assuring that the population would have a regular food supply (Cuny, 1989). This role of the military expanded enormously in the second part of the twentieth century, with militaries deployed not only to aid in domestic disasters but also to serve in international relief efforts (Wiharta and others, 2008). The cases in this section address challenges found everywhere: professionalization of the military, the shift from conscripted to voluntary armies, and the use of military personnel in national and international disaster relief.

France: Europeanization, Professionalization, and the End of Conscription

Conscription had been the basis of the armed forces since the levée en masse of 1793 and the loi Jourdan of 1798, which established the principle of conscription (unless stated otherwise, this section is based on Irondelle, 2003a; Lecomte, 2006; Bloch, 2000; Cole, 2010). Conscription-based national defense forces subsequently became widespread in Europe. After a century of de facto universal and compulsory military service, conscription was finally consolidated by law in 1905.

Political and military actors and the public at large endorsed and legitimated a service in arms premised upon a personal obligation, universal in reach, and equal in duration for all. After France gained an independent nuclear arsenal, the Force de Frappe, the army was reduced to a secondary role in the nation's defense but remained a symbol of French patriotism and a locus of national aspirations to the rank of the world's “third military power.” Still traumatized by the collapse of civil-military relations during the Algerian War, the army greatly valued the presence of conscripts as a “sacred current of air” and a means of maintaining ties with the nation.

The end of the Algerian War (1954–1962) prompted a rethinking of military manpower, as did the détente that came to prevail in international relations. France aligned itself with other Western democracies that had downgraded their militaries: military expenditure as a share of the French national budget fell from 28.5 to 14.8 percent between 1960 and 1992. Apart from these unfavorable military and financial trends, social changes typical of industrial democracies, such as a rise in individualism and a corresponding decline in traditional values and attitudes, gnawed at the legitimacy of the system. The decline of conscription encouraged professionalization and increased the share of career professionals within the armed forces as a whole.

From the 1970s onward, successive governments opted for a more or less artificial reduction of military manpower. As both the duration of service and the pool of potential recruits eligible for national service were circumscribed, the previously universal and compulsory draft was progressively transformed into a short, selective, and differentiated service. By the end of the 1980s, the French military had become a pseudoconscription body with barely 50 percent conscripts. The system's dysfunction notwithstanding, professionalization, though deemed desirable and even necessary, was rejected up to the mid-1990s as too expensive and financially risky. Drawing on the experience of the American military, critics doubted whether professionalization could yield military manpower of sufficient quality and quantity.

In the early 1990s, France retained the defense posture laid down by President Charles de Gaulle 30 years before. Nevertheless, the volume of the armed forces decreased from 1,153,000 men in 1957 to 520,000 in 1992. Built around a principle of national strategic autonomy, these policies served French interests in the Cold War when, as an associate of NATO remaining outside the unified military structure, France enjoyed the security privileges of NATO membership without sacrificing any of its jealously guarded sovereignty.

The army had been obsessed with the defense of the nation's eastern frontier since the 1870s; after the collapse of the Berlin Wall in 1989, it suddenly found itself with no threat to the nation, yet France still sought to maintain its international stature and standing next to the economic powerhouse of a reunited Germany. The outbreak of the Gulf War in 1990 forced these contradictions into the open. Barred from most operations outside France since the early 1960s, conscripts could not be sent to the region without the approval of the National Assembly. Faced with fierce public opposition to military action, President François Mitterrand saw little advantage in a debate over the question. Determined not to leave France on the sidelines, he decreed that only professional troops would fight. The army scrambled to cobble together an improvised all-professional force, transferring 5,000 professionals from throughout the army to fill out an expeditionary division of 15,000 men. Once in Saudi Arabia, the French were relegated to a role of relative inconsequence, dependent on the United States for tactical intelligence and logistical support.

Conscription was ratified in two consecutive Military Programming laws in 1991 and 1994 and in the Defense White Paper of 1994. The white paper confirmed the notion of conscription but marked a strategic rupture deeper than just the end of the Cold War, when for the first time since 1871 military planning and structure were not shaped by some major threat against the national territory. As the traditional notion of war between nation-states had given way to crisis management, French forces were destined to take part more often in foreign interventions, so military missions have evolved along two main lines: actions external to the national territory, and inclusion in multinational operations.

As he tried to mobilize new electoral resources against his adversary, Prime Minister Édouard Balladur, Jacques Chirac, a candidate with Gaullist family roots in the 1994–95 presidential campaign, suggested modernizing a significant element of the state apparatus by abolishing the draft. On February 22, 1996, Chirac, now the elected president, departed abruptly from what French authorities had promulgated just a few months earlier and mandated full professionalization of all forces by 2002. Three months later, on May 28, he announced the end of compulsory military service.

Successful reforms need to be compatible with the prevailing self-representations of professionals, and army officials had just completed a 20-year effort to switch from rigidly defined forces to a “modularity” that enables the army to tailor forces to the needs of the moment. The army seized the reins of the reform, making it easier to steer change in the right direction. France's Napoleonic heritage and statist nature underlie an administrative system that is not readily amenable to change (Rouban, 2008). Such a radical shift was brought about by critical conjunctures in which distinct causal sequences interacted at particular points in time, opening up possibilities for radical political change.

External forces combined with internal ones to tip the balance in favor of a change that had been contemplated and debated for many years. Europeanization pushed for greater integration, affecting cognitive and normative frames as well as actors and instruments of public policy (Irondelle, 2003b). With the end of the Cold War, the Gulf War, and an eager-to-win presidential candidate, the time was finally ripe to forgo conscription (De Wijk, 2003; Irondelle, 2003b).

Israel: Militarized Society or Civilianized Military?

The Law and Administration Ordinance, one of Israel's first legislative acts (1948), authorized the provisional government to establish land, air, and naval armed forces, vesting them with the authority to do all lawful and necessary acts for defending the nation (unless stated otherwise, this section is based on Seidman, 2010; Peri, 2006; Cohen, 2008; Levy, 2008, 2010; Bar-Joseph, 2010; Bar-On, 2012; Kober, 2008, 2011). The Defense Army of Israel Ordinance No. 4 supplemented the general provision shortly thereafter. Neither Section 18, which is still in force, nor later statutes have elucidated the actual roles and functions of the Israel Defense Forces (IDF). Since the mid-1990s the Knesset has expanded the language to provide a sound legal basis for the national but not purely military roles that the IDF has been expected to carry out. From the very outset, then, the political and military dimensions have been blurred.

The War of Independence of 1947–1949 resulted in horrendous bloodshed: 5,800 people were killed in the fighting and over 12,500 wounded, out of a total Jewish population of 650,000. One in every five of the war's victims was a civilian, a proportion roughly similar to that among the UK's war dead during World War II. Still, Israel developed nothing like Britain's “Blitz myth” of civilian heroism. The depth of feeling aroused by military associations among wide sections of the Jewish population reversed the traditional Jewish reticence toward matters military. Militarism, then, was not deliberately imposed on an otherwise passive population by the elites; it was a bottom-up phenomenon whereby Israeli-Jewish citizens embraced the armed forces as a defining ingredient of their new state.

During the new nation's decade-long austerity regime, the IDF became an indispensable tool of governance. Faced with enormous challenges, Prime Minister David Ben-Gurion used military service to promote government agendas and shape the “new” Israeli, so the IDF took part in all national tasks and provided ideological underpinnings as well. Apart from its popularity and legitimizing capacity, the military was perhaps the single most useful agent at the disposal of Israel's civilian government. It proved a relatively disciplined, cheap, and practical tool for carrying out government policies, since it was composed of unpaid yet well-organized conscripts. IDF engineering units, for example, were used to construct and maintain many of the camps that absorbed newly arrived immigrants.

Since economic considerations ruled out a fully professional army, a Swiss-type militia was devised, based on short conscription terms. The passage in 1949 of the National Service Law mandated conscription and reserve duties, obliging all citizens to bear the responsibility of defending their homeland, not just those who volunteered to assume that burden. By way of elimination, the IDF's famous “three-tier system” was grounded in compulsory enlistment for every Jewish man and woman, the length of which settled in the 1970s at two years for women and three years for men. The army's core was a small regular army consisting primarily of conscripts, with the officer corps and part of the professional echelon staffed by career personnel. The large reserve army was composed of former conscripts obligated to undertake several weeks of reserve duty every year to maintain their fitness as soldiers in case of war.

For the most part, the three IDF branches have been highly unified. Although the navy and air force have had separate headquarters and different uniforms, the general staff has served as the operational directorate of all elements of the armed forces, and the chief of the general staff (CGS) has been the nation's senior officer and supreme military commander. The CGS has always been one of the most powerful figures in Israel, possessing a capacity for political influence greater than that of his counterparts in other democracies.

Mass compulsory recruitment has been the cornerstone of the Israeli version of statism that has endowed the army with favored symbolic status as a universal and depoliticized military that stands above sectarian divisions, and tied a Gordian knot between soldiering and citizenship. The purported threat to Israel was discursively intensified, and the army took on the roles of nation builder and melting pot. The IDF was consolidated and spearheaded by the dominant social group of middle-class secular Ashkenazi men, so the very group that had founded the army populated its senior ranks and was identified with its achievements. Peripheral social groups and in particular the Mizrachim enhanced the army quantitatively but were perceived as unable to shape its qualitative values. Women were marginalized as well since they were relegated to auxiliary roles. Apart from its symbolic meaning, military service demarcated the boundaries of society, becoming a decisive standard whereby rights were awarded to individuals and collectives alike.

Israel's post-Mandate intelligence community was established in the summer of 1948, in the midst of its War of Independence. The community consisted of three main information services: military through the IDF, domestic through the Prime Minister's Office, and foreign-political through the Foreign Office. The prime task of the military intelligence service was defined as the collection and analysis of information about the Arab world and in particular its armies. The foreign-political service was assigned a comparable responsibility: to collect intelligence about the rest of the world, primarily Europe. However, in 1950, growing concern about the quality of the information the Foreign Office was providing led military intelligence to violate the agreed division of labor and to begin gathering intelligence in non-Arab countries as well. In 1951, bitter bureaucratic rivalry put an end to the Foreign Office's intelligence service, after which the Institute for Intelligence and Special Roles (the Mossad) was created under the auspices of the Prime Minister's Office. From 1953 onward, the Foreign Office no longer gathered or assessed global and regional strategic intelligence; military intelligence, which became the Directorate of Intelligence (AMAN, Agaf Modiin) in the IDF general headquarters, filled the vacuum as Israel's sole intelligence estimator.

Most scholars have deemed the Six Day War (June 5–10, 1967) the event that triggered the expansion of the army's role in national security affairs, granting it an extended share in policy making. Some even go so far as to characterize the behavior of the Israeli generals prior to the war as a “putsch” or a “revolt.” Relying on a false conception, AMAN failed to foretell that war was imminent; on the other hand, the move of the best-trained and best-equipped Egyptian armored division to the center of Sinai on May 25 was interpreted as an indicator of offensive intentions. The ensuing controversy pitted the generals against Levi Eshkol (the prime minister and minister of defense) and his ministers: the generals pushed for a preemptive offensive, whereas the political echelon endeavored to buy time while exhausting diplomatic efforts. As the generals reproached the politicians' hesitant vacillation, the criticism filtered through the mass media to the public. During those tense days Ben-Gurion, the octogenarian leader, expressed more than once his apprehension that a military coup might take place, but an out-and-out revolt never materialized. After all political and diplomatic ventures had come to nothing, Eshkol caved in to the public's and the media's outcries and ceded the defense portfolio to Moshe Dayan, who gave the order and set the onslaught in motion. In the 1967 war the Arab armies were destroyed and large territories were occupied, and the “existential threat” idea gave way to the motif of “security borders.”

The IDF had a decentralized and mission-oriented command system hinging heavily on IDF commanders' resourcefulness and improvisation skills. However, a good mission-oriented command should rest not solely on resourcefulness and improvisation but also on a thorough educational and training process through which all commanders acquire the same set of professional tools. Yet IDF commanders' professional knowledge has been wanting, to the extent that the IDF has been accused of a “bad anti-intellectual tendency”: the stronger Israel has become, the more the IDF has relied on its muscle rather than its brain. Yaacov Hisdai, a senior researcher for the Agranat Commission of Inquiry that investigated the circumstances leading to the outbreak of the 1973 October War, concluded that the IDF commanders under consideration lagged in innovation, abstract thinking, and a sense of criticism. The post-1967 hubris and the “aura of prestige” on the Israeli side clouded the judgment of Israel's decision makers with regard to the military.

Since the 1960s and 1970s the relatively few nonsabra IDF commanders, some of whom were interested in the intellectual aspects of the military profession, gradually disappeared from the officer corps, leaving the stage almost completely to the sabras. The sabras, the native-born Israelis, have a practice-oriented mentality, which results in strong performance-oriented behavior. As the years passed, IDF commanders accumulated much experience both in high-intensity and low-intensity conflicts, developing an experience-based coup d'oeil. Good intuition had often led to good decisions or success on the battlefield, which strengthened the feeling among IDF officers that there was no need for formal learning. Unaware of the whole spectrum of opinions presented by military thinkers with respect to the question of offense versus defense, IDF commanders opted mainly for offense as their preferred form of war. However, when it came to low-intensity conflicts such as Israel's War of Attrition with Egypt (1967–1970), the IDF was wise enough to apply a more balanced, defensive/offensive approach, as required by the nature of the challenge.

Following AMAN's failure to warn that war was brewing in early October 1973 and the high cost Israel paid for this oversight in the Yom Kippur War, an official investigation, the Agranat Commission, recommended ending the military monopoly and establishing analytical intelligence organs in both the Mossad and the Foreign Office. Consequently, in 1974 the Mossad set up a strong research body (the Directorate of Intelligence), while the Foreign Office's analytical body, the Political Research Department, remained relatively weak and unimportant. Although AMAN's Research Division lost its status as the sole estimator, it has maintained its seniority and serves to this day as Israel's senior intelligence estimator. Moreover, Israel is the only liberal democracy in which a military organ, AMAN's Research Division, serves as the leading national intelligence estimator not only in military affairs but also in political, economic, and all other issues considered relevant to the state's security. AMAN chiefs interact intensively with Israeli policy makers, chiefly the prime minister and the minister of defense. Other developed democracies regard such an arrangement as unhealthy for democratic life, primarily because it imbues the military with too much power, weakens civilian institutions, and blurs the border between civilian and military authority.

Against the backdrop of performance problems revealed during the 1973 October War and the 1982 Lebanon War, a revolutionary curriculum named Barak (“lightning” in Hebrew) was launched in 1989 at the Command and Staff College. The program was aimed at higher-ranking commanders, integrating for the first time theoretical, historical, and doctrinal knowledge and offering tools for improving doctrines, plans, and operational performance. Nevertheless, Barak was closed in 1994. Its instructors, who had been carefully selected from within the IDF as well as from academia, were blamed for “elitism” and released from their jobs one by one.

As early as the sober aftermath of the Yom Kippur War, social attitudes toward the IDF started to shift. Throughout the cultural and economic globalization of Israeli society that had started in the 1980s, neoliberal doctrines loomed large. In the 1980–2006 period military spending as a proportion of GDP dropped by more than 50 percent while GDP rose by about 200 percent. The army's role in defining social hierarchy eroded, giving way to other considerations such as individualism, privatization, competition, and efficiency. Individual achievement superseded the test of statism.

Reserve duty became a heavier burden in both absolute and relative terms, as it hampered soldiers both from competing effectively in an increasingly competitive labor market and from fulfilling their roles as fathers within a more equal division of labor in the family. Those bearing most of the burden pressed either to lessen the military sacrifice or to augment the rewards for it. Motivation to embark on combat duty waned, giving rise to a “motivation crisis syndrome.” Rather than standing above the market or even competing with it, the IDF was gradually subordinated to it. The Economic Stability Plan of 1985 included, inter alia, a reform of the reserve army, whose main thrust was to transfer the cost of reserve duty from the National Insurance Institute to the army. Attaching a price tag to the service of reserve soldiers instilled economic considerations in the army, which reduced the overall number of reserve duty days. For the first time, a semiselective recruitment model was devised, endeavoring to compensate for lost reserve duty days with the quality of reservists.

After the reserve service had deviated piecemeal from the universalistic principles of an inclusive people's army, the military service at large soon followed suit. Military service had been beyond debate for many years, but in 2010 less than 75 percent of Jewish males of draft age (Palestinian citizens have been exempted from service) enlisted in the IDF; these declining conscription rates attest to the subjugation of military values to the market. By the mid-1990s, the IDF was no longer a sacred cow, as the Lebanese imbroglio and the two intifadas ruffled the previously prevalent equanimity of Israel's cultural elite. Generals began to attract as many iconoclasts as idolaters. A stream of military-related films, novels, plays, and collections testified to the growing strength of the new mood. The rule of silence, which had formerly shielded the military from parental criticism, was broken as bereaved parents, sometimes alone and sometimes in groups, demanded that officers account for the deaths of soldiers in military accidents and tried to thwart their promotion. At the same time, many watchdog organizations have made headway, publicizing and legitimizing public concerns about the conduct of the IDF. Correspondingly, the mass media have become more exacting and fierce concerning once tabooed, hushed-up security matters.

A further threshold was crossed in the wake of the 2006 Second Lebanon War when, in marked contrast to their forerunners in 1973 and 1982 (Yom Kippur and the first Lebanon War), bereaved parents called for accountability not only of the political echelon, but also of the CGS and additional senior officers, demanding their resignation. Many scholars impute the erosion of the IDF's consensual status to the predicament of the Second Lebanon War where the IDF did not achieve a battlefield decision against Hizballah and even more so to the protracted Low Intensity Conflict (LIC) with the Palestinians.

In his testimony before the Winograd Commission (2006), established to investigate the Second Lebanon War debacle, the defense minister, Amir Peretz, admitted that he had regarded the chief of staff and the general staff as his “number one advisory body” on operational matters; a civilian staff at the Defense Ministry would therefore be regarded by the military as merely a “super general staff.” The commission also excoriated the anti-intellectualism of the IDF's general staff, pointing to the correlation between improvisation, which has become part of Israeli culture, and lack of professionalism.

The Israeli system's ailments of disdain for knowledge and theory and the IDF's monopoly on assessment seem to persist despite strong grounds dictating otherwise. However, some changes are taking place in the IDF, one of which is privatization in line with the recommendations of the Brodet Committee, appointed in 2007 to examine the defense budget. Military kitchens, vehicle maintenance, soldiers' transportation services, construction and maintenance of physical plants, and various instruction courses ranging from driving courses for Hummer vehicles to pilot instruction constitute only part of the contracted-out undertakings. This process will continue apace in keeping with global trends, as contracting out is more cost effective than retaining those functions inside the military.

United States: Resting on Its Laurels—FEMA's Vicissitudes from Ignominy to Luster and Vice Versa

Federal aid to disaster-stricken citizens dates back to the San Francisco earthquake of 1906 (unless stated otherwise, this section is based on Coyne and others, 2009; Irving, 2008; Perrow, 2005; Roberts, 2006; Schneider, 2005; Irons, 2005; Garnett and Kouzmin, 2009; Jenkins, 2010; Jurkiewicz, 2009). Alarmed by the disaster, President Theodore Roosevelt sent federal troops to help, but local officials, not federal authorities, remained in control, albeit unofficially. The first broad legislation defining federal authority in disasters was the Civil Defense Act of 1950, which centralized programs for defense against nuclear attack. Federal involvement in natural disasters remained mostly ad hoc and too little, too late. A series of ferocious natural disasters in the 1960s and 1970s caused great destruction, tipping the balance in favor of greater federal involvement. As legislation expanded, agencies were organized and reorganized, so that by the late 1970s a multitude of agencies sometimes worked at cross-purposes. The resulting fragmentation at the federal level called for a comprehensive emergency management policy to coordinate federal, state, and local responsibilities.

President Jimmy Carter responded by creating the Federal Emergency Management Agency (FEMA) in 1979, in one of his last attempts to restructure the federal government. Because Carter's authority was limited, he transferred staff, political appointees, and procedures from existing disaster organizations in order to set up the new agency without congressional opposition; the reorganization thus frustrated attempts to rationalize and streamline disaster policy. FEMA's primary goals were disaster relief, prevention, and mitigation; its secondary ones were coping with a nuclear attack and national security, tasks normally in the hands of other agencies.

As the Cold War intensified in the early 1980s, President Ronald Reagan gave FEMA a renewed civil defense mandate; the first goals were neglected and starved of resources while the secondary ones flourished. Within FEMA, the small division of the National Preparedness Directorate (NPD) was ordered to develop a classified computer and telecommunications network to secure continuity of government in the event of a nuclear attack. The network was developed by the National Security Council (NSC) and subsumed into the broader Department of Defense (DOD) under the national defense information network. Though initiated by FEMA and drawing upon more and more of its budget, only the DOD and the NSC could access that network while FEMA's disaster relief personnel could not. FEMA had developed one of the most advanced network systems for disaster response in the world, yet none of it was available in emergencies or during natural disasters. In addition, Congress could inspect neither the activities nor the budget of the civil defense part of FEMA.

In 1985, when the Justice Department investigated FEMA for cronyism in awarding contracts, fraud, and mismanagement, the head of FEMA, Louis Giuffrida, was forced to resign. The organization continued to ignore natural disasters, and when disasters came its personnel were poorly trained and funded, overwhelmed, and quite possibly inept, so that when disaster response went awry, politicians chastised the agency. Dissatisfaction with FEMA culminated in 1992 after Hurricane Andrew caused about $30 billion in damage in southern Florida, leaving 160,000 people homeless. The agency's primitive communication system forced it to buy Radio Shack walkie-talkies in last-minute preparations, while the state-of-the-art system FEMA had paid for remained unavailable. In the midst of the chaos, President George H.W. Bush bypassed FEMA's leadership and ordered Andrew Card, the secretary of transportation, to take charge of the recovery, calling in federal troops to assist. In 1993, the General Accounting Office pointed the finger at FEMA for failing to prepare states and localities for the great demand for food, water, transportation, medical care, and law enforcement, and for failing to reduce the vulnerability of the dense south Florida population. FEMA's poor performance in response to Andrew arguably contributed to Bush's failure to win reelection.

The agency's new director, James Lee Witt, was one of the many “friends of Bill” who accompanied Clinton from Arkansas to the White House. Witt had never attended college but had substantial experience in emergency management and extraordinary political skills. Drawing on a battery of reforms suggested by academics and professionals, Witt successfully lobbied Congress and the president for patience and support, convincing even the most skeptical members of Congress that FEMA could work to their advantage if it provided constituents affected by disaster with an immediate and effective response for which politicians could receive credit. Some national security functions were eliminated, and many positions that had made FEMA a dumping ground for political appointees with little emergency management experience were abolished. Following bold promises with bold actions, Witt hired deputies with experience in responding to disasters and adopted the recommendations of two expert reports that counseled a more streamlined approach to natural disasters. Witt trimmed FEMA's Cold War inheritance and built the foundation for a legacy of hazard mitigation and crisis management. The intellectual centerpiece of the reorganization was the “all hazards, all phases” approach, which deemphasized the agency's national security responsibilities while preparing for and responding to a range of natural, industrial, and deliberate disasters. During the 1993 reorganization, more than 100 defense and security staff were reassigned to other duties, and nearly 40 percent of staff with security clearances had their clearances removed. Practice reflected the organizational changes. While responding to floods in the Midwest in the summer of 1993, FEMA used mobile communication vehicles that had previously been reserved for national security. To ensure that FEMA could respond more quickly, Witt allowed the agency to put people and equipment into place before a disaster struck. Response improved as the agency cut much of the red tape.

Witt established a new Mitigation Directorate to reduce loss of life and especially damage to property by encouraging people to reduce risk and vulnerability before disaster struck. Under Witt's strong leadership during the mid- and late 1990s, the morale of FEMA employees improved vastly, and the agency cultivated its relationships with external constituencies. Under President George W. Bush, FEMA moved to privatize parts of disaster response and to emphasize counterterrorism over natural disasters. Another watershed was yet to come, one that would put a spoke in FEMA's wheel.

The 9/11 attacks on the World Trade Center and the Pentagon led to major changes in the U.S. federal government. Perhaps the biggest change was the creation of the U.S. Department of Homeland Security (DHS) in November 2002. The DHS consolidated organizations related to U.S. homeland security into a single cabinet-level department via a massive reorganization, aspiring to increase the efficiency and responsiveness of government agencies in preventing and responding to future terrorist attacks and disasters that threatened U.S. security. In February 2003, FEMA was placed under the authority of the DHS along with 20 other agencies. FEMA became a component of the Emergency Preparedness and Response (EP&R) Directorate in the DHS. FEMA moved intact to the DHS, and most of its operations became part of the EP&R Directorate, whose mission was to help the nation prepare for, mitigate the effects of, respond to, and recover from disaster. Nevertheless, after it had been consolidated into the DHS, FEMA was reorganized numerous times, and some of its premerger functions were moved to other organizations within the DHS. FEMA no longer had direct congressional oversight as before; rather, it was placed under the DHS—the third-largest department in the U.S. federal government. A web of oversight committees in Congress added bureaucratic layers with overlapping areas of oversight.

FEMA lost the cabinet status President Clinton had given it, top personnel left, and the remaining employees were so demoralized that the GAO rated its morale as one of the lowest of any government agency. Specialization and expertise dwindled away, as directors and personnel alike came with no prior experience in crisis management or disaster relief.

Before Katrina hit New Orleans, FEMA already ranked the likely damage from a strong hurricane hitting the city among the top three potential catastrophes facing the nation. Moreover, a 2004 tabletop exercise on a hypothetical Hurricane Pam hitting New Orleans pointed to significant weaknesses in the readiness of authorities to respond. Unfortunately, a scheduled follow-up exercise on evacuating New Orleans was aborted for lack of funding. The 121-page plan that emerged from the exercise left many issues to be determined.

When Katrina struck Louisiana on August 29, 2005, the response was slow, uncertain, and inconsistent. Local government units were overwhelmed by the magnitude of the disaster. After the levees had broken, Louisiana Governor Kathleen Babineaux Blanco did ask the federal government for additional resources, but amid the chaos she refused to declare martial law. The governor also declined a proposal from the White House to put National Guard troops under the control of the federal government. Public agencies were unable to stabilize local conditions or mobilize resources to get immediate assistance to disaster victims, producing anomic conditions and a general breakdown of the social order. Throughout the disaster, state and federal agencies worked independently on their own initiative, sometimes at cross-purposes. Even before Katrina struck, FEMA's director, Michael Brown, had planned to resign, exhausted by turf wars within the Department of Homeland Security. Several congressional committees investigated FEMA's performance, and following a thorough questioning by a House panel, Brown resigned.

The Katrina calamity prompted a reform of emergency management. Under the Post-Katrina Emergency Management Reform Act of 2006 (Post-Katrina Act), FEMA has been required to establish a National Preparedness System to guarantee that the nation has the ability to prepare for and respond to disasters of all types, whether natural or man-made, including terrorist attacks. Greater responsibilities have been vested in individuals and communities based on the premise that resilient communities that can quickly recover from a disaster begin with prepared individuals and hinge on the leadership and engagement of local government and other community members. FEMA's Citizen Corps and partner-program officials encourage state, local, regional, and tribal governments as well as private and nonprofit community-based organizations to establish and sustain local Citizen Corps Councils and partner programs through federal funding for local efforts. Because FEMA had been stripped of many of its functions and authorities, its officials had their hands tied and could offer localities nothing but advice while Katrina raged. The Post-Katrina Act transferred many preparedness functions and mission responsibilities back to FEMA. Legislation has strengthened centralization with respect to emergency management and national security, threatening historical federalist relationships. Presidential authority to use the armed forces for domestic purposes such as natural disasters, public health emergencies, or terrorist attacks, for instance, has been expanded. Legislation has given the president power to call the National Guard into national service even though the Guard was traditionally under the jurisdiction of state governors; at the same time, it has provided municipalities, communities, and individuals with the resources needed to cooperate, take action, and master their own fate.

A report of the Government Accountability Office found that although FEMA has improved its management of disaster-related resources, financial management has been arbitrary and inconsistent in terms of cost estimates because of a coding system that assigned a single code to every disaster. Such one-dimensional coding has made learning from previous experience impossible. The body of knowledge concerning what works best in both planning for and mitigating the consequences of disasters on the scale of Katrina has grown exponentially, leading to some very clear directives for both policy and practice. The roadblocks to further progress have more often been political and cultural than resource-based. Efforts need to extend beyond resource delivery and management to include political, cultural, and social intervention. The wearisome post-Katrina recovery has been attributed, inter alia, to the notorious style of politics and cultural peculiarities of Louisiana. With regard to the human factor, politics, culture, ethics, self-interest, and the protection of one's scope of influence above rational choice pose a challenge to advancing the effectiveness of disaster management.

Internal Security: Enforcing Law and Order

Maintaining order and safety within a territory has been a public task for millennia. Initially this was done by means of the military; most polities had very few officials whom we would identify today as “police.” Well into early modern times, policing was embodied in an official similar to a sheriff. Sometimes he had a few deputies; more often citizens were asked to help out when a criminal needed to be caught (think of the posse). The professional police force began in 1829 with London's Metropolitan Police and became widespread in the Western world from the late nineteenth century on as part of the government response to industrialization, urbanization, and rapid population growth. Police forces were initially modeled on the command-and-control style of the military, but gradually became more civilian oriented. This is especially visible in the emergence of community policing, which aims at involving citizens and other nonstate actors in the coproduction of security. With regard to the cases in this section, Colombia has introduced community policing, but it is not yet fully developed. South Africa introduced community policing in the mid-1990s, but it has not quite succeeded in engaging citizens in securing their own neighborhoods. A trend everywhere on the globe is the professionalization of police forces, making them distinct from the military.

The UK case in this section focuses on the prison system, which is very much a part of domestic policing policies. As with standing armies and police forces, the prison system is a creation of modern times. In early modern times, incarceration was only temporary, pending trial. Lengthy prison sentences are a feature of the modern world, and so is the prison system. Generally, prisons are government run as an expression of the state's monopoly over the use of violence. However, following the United States, the United Kingdom has experimented with partially privatizing its prisons, but the outcomes have not been satisfactory (as in the United States). Canada has two private prisons, and Israel tried to introduce them in 2004, but the law was struck down as unconstitutional in 2009. Generally, prison guards and probation officers have professionalized as well.

Colombia: A Quagmire of Guerillas, Drug Cartels, and Paramilitaries—Demilitarization Bogging Down

In many Latin American states, police reform was considered a sine qua non for economic development and the quality of democracy. Reform was challenged by a legacy of military control over police forces and by skyrocketing crime rates that caused many police forces to revert to repressive measures (unless stated otherwise, this section is based on Frühling, 2003; De Francisco, 2006; Llorente, 2006; Ruiz Vásquez, 2012; De la Torre, 2008).

The objectives of police work include crime prevention, immediate response to incidents that directly threaten the security of citizens, investigation of crimes and accidents in a given jurisdiction, and traffic control. In Colombia, despite its strong democratic foundation, society and the government have pressed the police to exceed these objectives against a backdrop of harsh realities: armed conflicts with guerillas, paramilitaries, and drug traffickers.

The Colombian police force has its origins in the National Police Corps, founded in 1891 under the Ministry of Government to maintain law and order in Bogotá. Barely two years after the force began its duties, a popular uprising broke out against it that even destroyed some of its facilities. The stage had been set for the disdain, fear, and apprehension that Colombians would feel toward their police force for years. The force's first decades were punctuated by periods of partisan conflict, the most notorious of which was the Thousand Days War (1899–1902). From 1930 to the end of the 1950s, the National Police was directly involved in the conflict between the Liberal and Conservative parties, condemning it to be an instrument of interparty conflict. The bloodshed culminated in the period known as La Violencia (the Violence) from 1948 to 1953: once power changed hands from one party to the other, there would be a massive purge of opposition police officers and a recruitment of fellow party members.

Absent sufficient resources to expand law-enforcement coverage into Colombia's many regions, police forces were locally forged in the Colombian departamentos (provinces) and municipalities, answering to governors and mayors and operating in relative autonomy from the central government. During the dictatorship of the military government headed by General Gustavo Rojas Pinilla (1953–1957), the police force was placed under the Ministry of Defense (at that time, the Ministry of War), which furnished it with new personnel, most of whom were drawn from the army and were untrained.

Since the cessation of the mid-twentieth-century violence, Colombia's government has been preponderantly civilian in nature. The subordination of Colombia's armed forces to civilian control has been the product of the role the army assumed during La Violencia rather than of the military's control of the government. During the military government, the army took control of the police force and militarized its organization and the training of its personnel; in that context the police force was recast as a centralized public entity. Although Colombia's first police school had been inaugurated in the 1930s, professionalization took off only once the force was consolidated as a national-level service. The career ranks of high-ranking and midlevel officers were reorganized in the mid-1950s along military norms. One objective was to give high- and midranking police officers parity with army officers.

Until 1991, both the armed forces and the National Police were led by the top army general under the auspices of the Defense Ministry. The Frente Nacional (National Front), a bipartisan agreement in force from 1958 to 1978 that enabled Colombia's two traditional parties to alternate in power at the national and regional levels, pacified the country. This allowed police units to consolidate under a single national force, and as of 1965, for the first time, a high-ranking police officer rather than an army general was appointed as its director. Between 1964 and 1966 the judicial police was set up. Given the rise of multiple guerrilla groups in Colombia from the 1960s onward, it was difficult to characterize the police as a law-enforcement organization. The capacity of the police to support the army in its mission of national pacification in rural areas was deemed more important than fortifying the National Police as an organization equipped to tackle escalating urban crime.

Throughout the 1980s, the National Police grew in size and intensified its urban presence to face the growing rates of urbanization and attendant crime in Colombia's major cities. Endeavoring to make the police more responsive to the urban public, metropolitan police departments offered law-enforcement services in Bogotá, Medellín, and Cali—Colombia's three principal cities. CADs (Centros Automáticos de Despacho), automated dispatch systems, were introduced in these and other large cities to modernize the handling of emergency calls and speed up the dispatching of patrol cars. CAIs (Centros de Atención Inmediata) were established first in Bogotá and soon in almost all Colombian cities; these centers, which deployed officers in posts at multiple points around a city, were explicitly initiated by the National Police to improve community relations through decentralization of basic police services.

In April 1981 an antinarcotics unit was created in the Operations Bureau of the National Police and assigned lead responsibility in the fight against drug trafficking. The unit received U.S. support, cautiously limited to a few teams and sporadic training courses. The Virgilio Barco administration (1986–1990) witnessed the escalation of the armed conflict as the National Police targeted hit men and vice versa, while drug traffickers for their part confronted guerillas. Drug traffickers took aim not only at those associated with guerillas but also at those who represented leftist or progressive positions, the exemplar being the mass murder of members of the Unión Patriótica, the leftist coalition. Later on, targeted murder extended to anyone who opposed the interests of the drug traffickers, including journalists and politicians. In April 1989 a Special Armed Unit (Cuerpo Especial Armado) was created under the direct supervision of the National Police and vested with the power to dismantle networks of hired assassins. Those gangsters were nonetheless one step ahead. The government reacted by reinforcing the National Antinarcotic Bureau, which had evolved out of the original unit in the Operations Bureau. By and large, the government was improvising, attending to circumstances as they unfolded rather than planning strategically, with neither short-term nor long-term goals.

In 1991 Colombia adopted a new, more democratic, civil, and participatory constitution. The new constitution conceived of the National Police as a standing armed civilian body that reports to the national government and is part of the Fuerza Pública (that is, the armed forces and the National Police). Whereas the 1886 constitution had established the army as the nation's sole armed body, the new provision granted legal status to the National Police as well, at a time when the force was institutionally paralyzed and had waned in popularity after a chief of police and several underlings had been linked to the drug cartels and had formed bands of thieves and kidnappers. Meanwhile, between 1983 and 1992, homicides in Colombia more than doubled, rising from a rate of 32 to 79 homicides per 100,000 inhabitants. Public criticism of police involvement in illicit activities also grew as police gangs formed in several cities to carry out armed robberies and “social cleansing”—a euphemism for the selective killing of criminals, prostitutes, beggars, and the mentally ill.

The César Gaviria administration (1990–1994) forged a new Search Bloc (Bloque de Búsqueda), a coordination mechanism to make the components of crime-fighting agencies pull together, while setting forth the National Strategy against Violence (Estrategia Nacional contra la Violencia); the Public Prosecutor's Office, created by the 1991 constitution to direct criminal investigations, was to work in harmony with the armed forces, the National Police, and the Administrative Department of Security (the intelligence agency under the aegis of the presidency).

The Gaviria administration laid down the first reform package in 1991, calling upon civilians to be more active in the design of public policies on national defense and citizen security, including related planning and allocation of resources. The National System for Citizen Participation in Police Matters was purportedly designed to allow citizens from all strata of society to provide input on police matters, which would in turn assist police officials in guiding and directing their rank-and-file officers. For the first time citizen security and national security were decoupled, following a presidential discourse in which the government finally articulated a strategy that clarified the state's objectives and its internal security policies as well as a scheme for developing the security forces. A civilian police commissioner was appointed to preside over an external office and provide civilian oversight of the National Police. The appointment in 1991 of a civilian defense minister after 40 years of army control over that office likewise served to wrest the police from the military line of command; it vested the police with greater operational autonomy with respect to the army, which was supposed to wield “operational control” over the police only in cases authorized by the minister. The police also gained greater control over their budget, which had formerly been allocated by the military as part of the overall defense sector.

Various regulations paved the way for greater mayoral and gubernatorial discretion and control over policing within jurisdictions, thus decentralizing police services based on local-level security priorities. A new professional career track was created in order to flatten the hierarchical command structure and increase the ratio of supervisors to rank-and-file officers. Cadets who had been trained during the Frente Nacional (1958–1978) became police chiefs in the 1990s. From then on, the police force was headed by a chief who was himself a police officer and was managed by officers who had been trained as police, by police, and in police academies, so the force was able to forge its own institutional identity and interests. Although the reforms laid down in the first reform package never took hold, they did lay the groundwork for successful reform efforts later on and contoured the government's new approach to the management of public safety and citizen security and the role of civilians therein.

A second major attempt to reform the police reasserted most of the 1991 provisions and ushered in some new ones, aiming to restructure the National Police by creating three separate police forces: urban, rural, and criminal-oriented organizations, as well as a separate investigation unit. The urban force would have had a more civilian profile, while the rural force would have had a more military profile and received specialized military training in order to operate effectively in regions where guerilla insurgents were based. The main goal was a less military-style command structure, in keeping with international police trends. This restructuring never materialized, as the urban orientation the police had been developing since the 1980s remained predominant. Measures aimed at expanding civilian control over the police fell short of expectations, and measures to strengthen government oversight never got anywhere either. The suggestions sparked resentment among the police leadership, since the recommendations had been made by external commissions from outside the National Police organization. Moreover, most high-level and midgrade officers refused to embark on the new career path, since it would have negatively affected their prestige and privileges, and so they remained on the old path.

The presidency of Ernesto Samper (1994–1998) was plagued from the outset by allegations of bribery by the drug cartels, as a result of which Samper was unable to continue his predecessor's efforts to assert civilian control over the police. However, he did appoint General José Serrano to head the police. Serrano was renowned for leading an aggressive and ultimately successful campaign against the drug cartels, and after the Medellín cartel had been vanquished in 1993 he also gained credibility with the U.S. government. In 1995 he used a presidential decree to purge more than 8,000 police officers thought to be involved in illicit activities.

Serrano's reform efforts included several smaller initiatives that fell under the umbrella of a cultural transformation of the police. The aim was to give the police a more civilian-oriented style with better leadership, an emphasis on improved management, and better personnel training, in line with the notion that citizens were “clients.” Serrano introduced police surveys intended to gauge police efficiency in response and crime control and to serve as a method for the police to improve in areas deemed deficient. Serrano also initiated a neighborhood watch program adapted from a British model. The Colombian versions were named Frentes de Seguridad Local (Local Security Fronts) and Escuelas de Seguridad (Security Schools). The community fronts grew from 2,700 in 1995 to more than 6,800 by 2001. In Bogotá and Medellín, two of the country's three biggest cities, this endeavor appeared to produce positive results in terms of crime control and a positive image for the police.

Probably the most critical part of this reform effort was the development of a strategic planning board consisting of all the top commanders from every department. It was to meet on a regular basis in order to measure progress and plan for the future at the regional and national levels. The police contracted with various Colombian universities to teach top commanders courses in a more civilian-oriented form of leadership and management. Serrano envisioned a more horizontal relationship between senior commanders and mid- to low-ranking officers whereby assignments would hinge on ability, but senior officers resisted the attempt to replace the steep quasi-military structure.

Andrés Pastrana's administration (1998–2002) launched Plan Colombia with U.S. support. Grasping the degree to which drug trafficking could affect society, the administration decided the time had come not only to dismantle drug-trafficking organizations but also to resolve the problem of illegal crops. Thousands of hectares have been destroyed since the initiative took off. The fumigation policy strengthened the National Police in its confrontation with the drug traffickers, as it added to the force's capacity to tackle crime organizations.

A special commission was appointed in late 2003 by the minister of defense to analyze the functioning of the internal and external police control mechanisms and to make recommendations for enhancing them. The commission expressed its concern about the lack of a comprehensive police integrity policy, which should be the motor of any reform that has to do with police accountability. Serrano's assiduous measures did seem to bear some fruit: compared with other Colombian government institutions, the police force has shown the largest increase in public confidence over the past decade, with the armed forces a distant second. Community policing has gone hand in hand with legal and institutional changes in Colombia, as the 1991 constitution embodies citizen participation. In accord with present-day policing trends, the private sector has urged the police force to modernize its policies and improve its image. The community police program began operations in 1999, following an elaborate strategic model with defined actions, indicators, and results to be obtained, such as a closer relationship with the community, peaceful coexistence, crime deterrence, and excellence in service. The community police officers were immediately recognizable by their bicycles and distinctive vests. However, Colombia has never fully implemented a real community police model leading to a transformation of organizational structure, conduct, and culture.

Since 1996 the new community police force has adopted the Local Security Fronts program, which encouraged residents to organize within their neighborhoods, install alarms, and exchange telephone numbers so as to alert one another and the authorities when suspicious activities occurred in the neighborhood. By the same token, this new division took over the earlier Security Schools program to train neighborhood leaders and residents in organization and community involvement as they relate to block security. In 2006 there were 9,686 Local Security Fronts in Bogotá, covering 15 percent of the city's residents and 11 percent of its households. However, the majority of fronts and police representatives have not met on a regular basis to tackle security issues.

The community police force in Colombia has been one of the largest in Latin America, but it has been insufficient to fully establish a community model. Perhaps the strongest aspect of community policing in Colombia has been its officer training, which takes place in both private and public universities. However, this program met with opposition from mid-ranking police officials who wanted the training to be carried out as before, in traditional police academies. Certain sectors within the universities resisted the training program as well: they did not want their students mixed with the police, whether because of differences in social status at private universities or because of an aversion, at public universities, to an institution that had historically repressed student demonstrations.

The community police have had to take on duties unrelated to the activities initially planned for them. High-ranking officials have always used the community police to cover gaps in service due to lack of personnel, so community police officers have been used to recruit paid informants and in the war against terrorist (especially guerilla) activities. However commendable initiatives whereby citizens protect themselves may be, such undertakings have a sad history in Colombia, given the formation of self-defense or paramilitary groups.

The Colombian police force has become ever more independent of all types of ties and as a result more difficult to control. It was difficult to implement a Chicago-style community police force in Colombia because it required citizens and neighborhood residents to oversee and evaluate police actions in a way that might have limited the autonomy of mid- and high-level officials. High-ranking officials' fear of losing power to the discretion of their subordinates, the lack of acceptance by other beat officers who considered themselves tough compared to what they believed to be a soft type of policing, ineffectiveness in coping with high crime rates, and insufficient neighborhood and area coverage were among the factors that foiled the community police model in Colombia. Institutional centralization, for its part, has prevented beat officers from acting autonomously within their sectors to create their own programs. In an attempt to resolve more pressing problems, community policing has been somewhat marginalized: the National Police has suffered a reduction in human resources and budget owing to internal conflicts, narcotics trafficking, and organized crime. Despite the cutbacks, it has had to assume numerous additional duties, such as protecting important people, all of which has limited its potential scope of action.

Community policing has become the face of the police in public activities of a more convivial nature, such as building bicycle paths and schools for underprivileged children in unauthorized squatter neighborhoods in several Colombian cities, mainly Bogotá. Local authorities have tended not to intervene in these areas, precisely because the neighborhoods have not been legally recognized by the city, so charismatic officers have worked in conjunction with private businesses to install basic public services. Although Colombia has failed to implement community policing in its entirety, the model did gain some ground. Unlike the paralysis of previous decades, the Local Security Fronts and similar initiatives involve the citizenry in various undertakings in which they can join forces with the police.

South Africa: Post-Apartheid Community Policing—Transmuting the Police Force to a Police Service

The oppression of Indian, Chinese, and African workers in the early 1900s and sporadically of white industrial workers in 1913, 1914, and 1922 by the South African police embodied the practices of a police force acting on behalf of an aggressive imperial capital (unless stated otherwise, this section is based on Brogden, 1989; Van der Spuy, 1989; Malan, 1999; Marks and Fleming, 2004; Marks and others, 2009; Rauch, 2001; Masuku, 2005; Phillips, 2010). In no other British dominion was policing so remarkably an agency of one particular class acting against its opponents. With no distinction between police and soldiers of the kind familiar today, policing was haphazard and uneven prior to the formation of a national police force in 1913. Dual-purpose units that carried out both military and police tasks were the rule rather than the exception. The Frontier Armed and Mounted Police (FAMP), for example, was such a dual-purpose unit policing the outlying rural districts of the Eastern Cape. Frequent border disputes alongside frontier wars and black “risings” prevailed throughout the Eastern Cape, lending support to the military (as opposed to the police) function of the FAMP.

From the eighteenth century up to the early 1990s, policing in South Africa was embedded in a web of institutional practices whereby the imperial state legitimated its sovereignty over the colonial territory. As such, it had operated as an internal army of occupation acting mainly on behalf of the white incomers and their descendants against the indigenous population and against nonwhite migrant labor. After the war ended in 1945, a concerted effort to reconstruct the public service as a whole included an ambitious three-year plan to upgrade the South African Police (SAP) force. The SAP had been taking orders from the government of the day, and from 1948 onwards, when customary segregation came to be cast in a rigid legal form, it became the enforcement arm of the apartheid regime.

One of the most visible symbols of the grand apartheid was the creation of the homeland system (1960s and 1970s) that segregated black South Africans into ethnic groups, assigned each group a small piece of land, and created some form of administration for each homeland. The “grand apartheid” made sure there would be no black South African citizens by forcing all nonwhite Africans to exchange their citizenship for that of a so-called “independent state” based on racial and ethnic segregation. The “independent” homelands were entitled to issue passports, create defense forces, conduct “foreign affairs,” and so on. These self-governing homelands and independent states deployed bodies of armed men to maintain law and order. As primary enforcers of discriminatory measures, the police became the immediate symbol of oppressive rule.

The remarkable democratic transition in South Africa has required significant police transformation even though it did not require a UN peacekeeping presence in the field. At the time of Nelson Mandela's release from prison in 1990, there were 11 police forces in South Africa, each constituted under its own piece of legislation and operating within its own jurisdiction. The largest force was the SAP with approximately 112,000 members geared toward the needs of the apartheid order; the other 10 were the “homeland” police forces. Street-level policing was conducted in a heavy-handed manner with bias against black citizens and little respect for rights or due process. Criminal investigations were largely reliant on confessions extracted under duress, and harsh security legislation condoned various forms of coercion and torture. Policing techniques were outmoded, partly because of the international campaign that beset the apartheid government. The (new) Interim Constitution for the Republic of South Africa, which came into force on April 27, 1994, set up a single national police service out of the 11 agencies in operation, democratizing police agencies.

The process of changing from a police force to a police service was formally started in 1995 with the appointment of a national police commissioner to head the SAPS (South African Police Service), along with a new South African Police Service Act that provided for the establishment, organization, regulation, and control of the SAPS. The government's major national policy framework, the Reconstruction and Development Program (RDP), also bore on the goals of police transformation. With “peace and security for all” as one of its six basic principles, the RDP determined that the police must be made more representative of the people, more attentive to human rights, and more responsive to the communities they serve.

Beyond a concern with effectiveness and efficiency, international assistance to the police pursued normative transformation by imparting and inculcating international criminal justice standards. It has pushed for reform of internal management practices coupled with a fair amount of politically neutral technology transfer. Unlike regime police, which are primarily concerned with what government requires, democratic police are supposed to respond to citizens' needs and be held accountable for their deeds and omissions through multiple political and civil mechanisms.

The postapartheid South African police set in motion an institutional change in the relations of the police to government and other social structures, thereby affecting the purpose, functions, control, and accountability of the police. As the new police service ventured to demilitarize and civilianize, “community-oriented policing” became the cornerstone of official policy in the democratization process. Leaving apartheid practices behind, community policing was seen as a way to improve the relationship between the police and the public, making sure that the police represented and served the interests of the public in their daily work.

In 1997 the Department of Safety and Security published its formal policy document entitled “Community Policing Policy Framework and Guidelines,” presenting community policing as a collaborative, partnership-based approach to (local-level) problem solving. Acknowledging that the objectives of the police can only be achieved through a collaborative effort of the police with other government organizations, structures of civil society, and the private sector, community policing was also conducive to bottom-up governance and civic participation as promised by the new democratic government. Community Policing Forums (CPFs) that had been stipulated in the constitution were established at all but 21 of South Africa's 1,221 police stations by 1997, but the transformation had not always gone smoothly as demonstrated by the Public Order Police Unit.

The African National Congress-led government decided amidst much controversy to maintain a separate, specialized public order unit within the newly transformed SAPS. Undertaking to supplant the notoriously brutal Riot Unit (later known as the Internal Stability Division [ISD]), the unit was renamed the Public Order Police (POP) unit and became the largest specialized unit in South Africa. New training was introduced mostly in terms of operational procedures, tactics, and equipment. Insignia were changed and affirmative action policies were advocated in an attempt to rectify racial and gender imbalances within the unit. The POP unit was expected to transform itself from a highly militaristic, reactive, and repressive policing body to one that was civilian, communal, service oriented, accountable, nonpartisan, and committed to human rights values. The unit had to change its style from repressive to tolerant, from reactive to preventive, from confrontational to consensual, and from rigid to flexible, in line with international trends in public order policing. Contrary to expectations, structure and command remained similar to those of military organizations. The POP unit was expected to embrace community policing in terms of both style and philosophy, to consult with community groups about public order problems so as to jointly decide on ways of managing crowds and public disorder, and to provide ongoing reports on operations. No longer could this unit operate unilaterally in response to police-defined problems. New public service legislation stipulated participatory management; the development of performance indicators and evaluations of individual progress and contributions were viewed as integral to an effective human resource management and development strategy. Despite much ado, behavioral change has been perfunctory. Changes have been slow and difficult, mostly in the area of labor-management relations, which remained rather autocratic. Evaluation of performance has been ineffective, motivating members only to a limited degree.

Traditional police resistance to change and an authoritarian, hierarchical, nonconsultative, and nonparticipative public service ethos, coupled with an inflexible yet irresolute and uninformed leadership, drove a wedge between top management and rank-and-file members, who felt unable to contribute to decision making. Since members felt unsupported in the change process, their commitment to the unit waned. Management practices that remained remarkably unreconstructed failed to cultivate participatory management, and leadership stayed directive. Although corporate and new public management practices have had much to offer police organizations, the nature of police work alongside entrenched normative schemas with regard to discipline stood in the way.

Marks and others (2009) have pointed out that community policing has ultimately become a dead letter. In keeping with global trends, the community policing narrative has come to focus almost entirely on ways to mobilize nonstate actors to legitimize and increase the effectiveness of the police, whereas community policing has inherently been about bringing the state closer to civil society in coproducing security. Partnerships and joint working agreements have been made with private security companies. The police have looked upon this burgeoning industry as their “natural” ally and partner, whereas nonstate, more informal civil society groupings have been regarded more as potential threats to security and less as contributors to it.

Police transformation also had to do with improved oversight and accountability to ensure that the organization adhered to the constitution and policies of the democratically elected government. Democratic structures built around explicit values of good governance, transparency, and accountability have superseded apartheid institutions. The Independent Complaints Directorate (ICD), a civilian-run state structure, has been vested with the responsibility to investigate all cases of death inflicted by police actions or occurring in police custody, as well as any other allegations of criminality or misconduct brought to it by a member of the public. Although funded by the national Department of Safety and Security, the ICD has been independent of the SAPS and has been required to present an annual report on its performance to the minister of safety and security. A number of other state structures tasked with upholding the Bill of Rights enshrined in the constitution have been able to play an indirect oversight role with respect to the police. The Human Rights Commission, the Commission of Gender Equality, and the public protector have addressed complaints concerning problematic police conduct. A further state structure, the Public Service Commission, has researched and evaluated the extent to which the SAPS has adhered to key government policies.

Britain: Integrating Offender Management—Performance, Contestability, and Amalgamation

The Prison Service of England and Wales was part of the Home Office. While the Home Secretary was ultimately responsible and accountable to Parliament for the functioning of the service, one of the junior ministers of the Home Office was responsible for the Prison Service. Prior to 1990 the Prison Board was the strategic layer connecting the political layer and the prisons. It consisted of the director general, his deputy, four regional directors, four headquarters directors, and two nonexecutive directors (unless stated otherwise, information in this section is from Resodihardjo, 2009; Nossal and Wood, 2004; Flynn, 2007; Nathan, 2003; McLennan-Murray, 2011; Talbot and Johnson, 2007; Talbot and Talbot, 2013; Lawrie, 2011; Robinson and Burnett, 2007; Hough and others, 2006; Faulkner, 2005).

Despite its avowed neoliberal stance, the Conservative government of Margaret Thatcher, which had come to power in 1979, was reluctant to privatize prisons. A select parliamentary committee appointed to look into the state of the British prison system visited the United States in 1986 after taking up the idea from a 1984 report by the Adam Smith Institute (ASI). In view of severely overcrowded facilities, the committee issued a report in 1987 proposing an experiment whereby private firms would be allowed to tender for custodial facilities, and for remand centers in particular. Meanwhile, congestion increased apace in prisons, many of which were relics of the bygone Victorian era.

Those suggestions had been on the back burner until 1990, when a riot erupted in the Manchester Prison colloquially known as “Strangeways.” The £70 million repair bill was only the beginning; by the time the Strangeways riot ended on April 25, 1990, disturbances had broken out in more than 20 prisons. Order was restored only after three people had lost their lives, 133 inmates and 282 prison staff had been injured, and the costs, including those of keeping prisoners in custody in police cells, exceeded £100 million. The subsequent Criminal Justice Act of 1991 set the scene for partial privatization, with regard to which Angela Rumbold, then prisons minister, said: “If, and only if, the contracted-out remand center proves to be a success, might we move toward privatization of other parts of the prison service.” (quoted in Nathan, 2003, pp. 166–167)

The Woolf Report delved into the way the service was organized and suggested some rearrangements while balancing justice, control, and security; it placed the blame for the revolt on inadequate staffing and facilities as well as on poor management. The two mechanisms espoused were the managerial solutions of preference recurrently assumed by Conservative governments: market testing and agencification. In 1992, Strangeways was market tested as its management was put out to tender. In April 1993 HM Prison Service and the Scottish Prison Service were launched as “Next Steps” agencies; in other words, executive agencies standing at arm's length from their parent ministry, the Home Office.

The first contract to manage a prison was awarded to Group 4 Remand Services Ltd. for the management of the newly built Wolds Remand Prison, which opened in April 1992 for 320 adult prisoners. Private companies were invited to run new prisons without a counterbid from existing employees of the Prison Service. By 1996 there were 6 private and 140 public prisons in Great Britain; by 2003 the number of privately operated prisons had risen to 10. The success of the market testing caused a change of heart among the previously tenacious Prison Officers' Association. Since New Labor's landslide victory in 1997 the Prison Service has had some success in tendering; it retained, inter alia, the Manchester prison and even won Blakenhurst, the flagship of the successful private sector. Over the past two decades prisons have been variously governed: Some have been publicly owned and run with or without Service Level Agreements (SLAs—the public variant of contracts); others have been privately owned and run; and, last, there have been a few Private Finance Initiative (PFI) prisons. Both PFI contracts and SLAs have tended to be as elaborate as possible, specifying desired prison regimes as well as outputs and outcomes such as weekly hours of purposeful activity (work and education) per prisoner and escapes.

According to the National Audit Office (NAO) and most research the private sector prisons in Britain have been neither better nor worse than the publicly owned and run establishments: Both sectors have displayed a complete spectrum from excellent prisons to failing ones. Ten years into reorganization and market testing, reports and inspections have hardly found any area of prison management not in need of improvement since standards, regimes, and accountabilities have been found wanting.

The culture of performance has compelled all prisons, including the smallest ones, to commit disproportionate amounts of resources to secretariats/support units that collect, collate, manage, and analyze performance data, thereby falling short of prisoners' needs, since HQ has viewed those needs through the performance data rather than through the reality of prisoners' experience. The chief inspector of prisons (HMCIP) dubbed this phenomenon the “virtual world.”

All in all, no other European nation has commissioned privately financed, designed, built, and operated prisons or contracted out the custodial functions in a prison. In terms of the number of private prisons, the United Kingdom is second only to the United States. In addition to private prisons the United Kingdom has privately operated secure training centers for young offenders, immigration detention centers, prisoner escort services, electronic monitoring programs, provision of a wide range of noncustodial services in publicly run prisons, as well as major programs for privately financed, designed, built, and operated court complexes, police complexes, and probation hostels.

The Probation Rules of 1907 laid the foundation of the ethos of the service to “advise, assist, and befriend” offenders. Probation work rested on social work ideas about intervening in the lives of offenders to help them to avoid crime. It was a highly localized service run by probation committees, and in 1999, it was somewhat of a patchwork consisting of 54 services. The qualification for those working in the service was a social work one, and officers worked in a relatively autonomous way with a professional relationship with the courts. Over the years, the Home Office had attempted to assert control over the service, a matter that became more important when the government decided to change the role of the Probation Service. The 1988 Green Paper Punishment, Custody, and the Community and the Criminal Justice Act 1991 made it clear that probation was to become not only a “punishment in the community” (rather than a social work-based process) but also a punitive noncustodial sentence in its own right, not an inferior alternative to custody.

From the outset, the New Labor government oversaw the amalgamation of many previously scattered public entities. In 2000 the Labor government “nationalized” the Probation Service for all intents and purposes, consolidating the Probation Services into a national service for punishment in the community under the auspices of the Criminal Justice and Court Services Act. Forty-two Probation Boards succeeded the erstwhile 54 probation committees within the National Probation Service for England and Wales. The newly erected boards were also made coterminous with the boundaries of police authorities, prosecution, and court services. The (National) Probation Service (NPS) became a national service with its entire funding transferred from central government, a director general in the Home Office, and all the members of probation boards appointed by the home secretary. Regardless of talk about a matrix model of management and accountability and of the NPS's National Probation Directorate (NPD) as the hub of a wheel rather than the apex of a hierarchy, the NPD became the de facto headquarters of an unambiguously top-down, command-and-control structure.

The government was very clear that probation represented law enforcement rather than social work because the public primarily wanted offenders punished and made to make reparation for their crimes. NPD produced a suite of national targets and indicators that would compel probation agencies to deliver what the public was said to want and in particular rigorously enforced compliance with the requirements of orders and postrelease licenses. The performance data were reported vis-à-vis the centrally prescribed targets in a “weighted score card” to produce league tables of probation areas. Although resources and activity tended to be directed toward what got measured and this was often processes rather than quality or outcomes, overall this approach succeeded in creating more consistent and improved standards of work such as increased levels of contact with people under supervision and universal enforcement action against those who failed to comply.

At the end of 2003, just two years after the establishment of the NPS in England and Wales, the Carter report (a Correctional Services Review commissioned by the Cabinet Office Strategy Unit) suggested consolidating the Probation Service and the Prison Service into a single National Offender Management Service (NOMS). This was to ensure the “end-to-end management” of offenders, regardless of whether they were given a custodial or a community sentence. Those twin services had hitherto been very poorly coordinated despite dealing with very similar “clients”—that is, offenders—so the Home Office responded with a document of its own endorsing the Carter Review and stating that NOMS would be inaugurated within months.

The 2003 Criminal Justice Act laid the groundwork for forging the new NOMS along the lines of the Carter report. The new act redefined the purposes of sentencing pursuant to the policy position that the New Labor government had been advocating for some time, endeavoring to stop the situation whereby offenders fall into the gap between the services. The stated aims were to use resources more effectively and reduce reoffending rates, and at the same time to increase public confidence in the penal system and in the criminal justice system as a whole. This was by no means a marriage of equals: the Probation Service was less than one-tenth the size of the Prison Service when they merged. Apart from amalgamating the Prison and Probation Services into a single service, the commissioning and providing functions within that service were to be severed. Amalgamation took place as of June 2004. Below the chief executive, commissioning was supposed to be the responsibility of a national offender manager working through 10 Regional Offender Managers (ROMs), one appointed for each of the nine English regions and one for Wales. Those 10 ROMs were supposed to take over the commissioning role from the existing 42 probation boards. A contestability agenda was ushered in, officially defined as being about challenging existing suppliers to demonstrate that they continue to offer the best value for money to the taxpayer. Eventually, NOMS introduced a partial purchaser-provider split in which individual prisons and probation service areas might be put up for competitive tendering.

Those short-lived arrangements were totally ineffective, since ROMs had no control over the budgets for prisons or probation areas; they merely raised the level of bureaucracy and set prison area managers against ROMs. Three home secretaries left the Home Office between 2004 and 2007 under less-than-optimal circumstances (Blunkett, Clarke, and Reid). Following the departure of Reid in 2007, the government was reorganized, and NOMS became part of the newly formed Ministry of Justice (MoJ).

The Offender Management Act 2007 (OMA 2007) has tipped the scales in favor of public protection even more, furthering services to victims rather than focusing on offenders. Multi-Agency Public Protection Arrangements (MAPPA) have transformed the way local agencies and particularly probation and police worked collaboratively and effectively to manage the highest-risk offenders in the community. Probation liaison teams provided information and support to victims of heinous crimes.

Those developments reinforced the identity of probation amongst its own staff and other agencies as a service working for the “law-abiding majority.” For all of its history up to the mid-1990s there had been a culture of autonomy for probation officers, which had been derived from their historical role as officers of the court with a direct personal accountability to sentencers. The imperative to hit government targets and to deliver only standardized programs of supervision meant that such autonomy was no longer acceptable. Managers began to set clear objectives for their staff based on targets, national standards for practice, and record-keeping requirements, and to monitor their compliance with them. The right to manage has been established so that staff are now held properly to account through formal disciplinary and competence processes if necessary.

By April 2010, the 42 boards had been reconstituted via the OMA 2007 as 35 trusts. Those nondepartmental public bodies, arm's-length organizations sponsored by the MoJ, deliver services under contract to the secretary of state; they resemble the pre-2001 committees in the sense that they are semi-independent local bodies, but with the advantages of a modern service delivery orientation in their ethos, organization, and processes. The OMA 2007 has removed the monopoly provider status of probation trusts. The secretary of state has become responsible for the provision of probation services and able to source them from any provider, which means that in future trusts may have to compete with others to deliver probation services. The payment-by-results approach mandates choosing whichever provider generates the best value for money, whether public, private, or not-for-profit. Probation has become a disciplined and outward-facing service distancing itself from its origins as a social work service for offender clients while accepting that it must demonstrate the value of what it does vis-à-vis public expectations.

The Judiciary System: One State under the Rule of Law

Next to the military, police, and prisons, the judiciary is an important element in any state's system of the rule of law. The judiciary initially existed to punish violators of the law, but in modern times it has increasingly turned to mediating conflicts between citizens and to protecting citizens against the power of the state. Whereas the judiciary was initially a domestic, national affair, it has acquired some international, even global features. Generally, national laws have been standardized and then increasingly adapted to, for instance, international human rights standards. This is a process under way in, for instance, the People's Republic of China (see following), and has been a feature of Western countries since the early twentieth century. Especially with regard to the international and global environment governments operate in today, the United Nations (see following) has come to work in a world more and more dominated by intrastate conflicts, and has been fairly successful in containing interstate conflicts. One of these international human rights issues concerns migration, and Germany is one of those Western countries that has experienced an influx of foreign nationals. While migration is an international challenge (see Arnold, 2010), each country addresses this policy area within the national context and traditions.

The People's Republic of China: The Silent Revolution—Rationalization, Modernization, and Constitutionalization

The rhetoric of the rule of law in China is no longer a ritual formula (information in this section is based on Balme, 2005; Lin, 2003; Gechlik, 2005; Cabestan, 2005). Besides the obvious self-legitimating effects for the regime, it produces new realities and systemic effects on both institutions and individuals. The law is there to stay: even in a communist or so-called socialist world that shuns political and legal legitimacy, such reticence proves purely theoretical in the daily exercise of the law. The rebirth of the legal professions in China seems to be producing a milieu made up mostly of individuals who are in love with modernity—liberals in the political sense of the term. Just as illegal practices can survive in a modern judicial system, so the judicial system of the People's Republic of China (PRC) has undergone gradual modernization despite persistent clientelist, illegal, or informal practices. Institutionalized determination has partially disqualified clientelist practices in favor of more modern procedures, endeavoring to constitute norms. Judicial reform has been merely a reflection of other profound changes, observable as much on the level of objective social reality as on that of our understanding of these phenomena. Since China's opening up in 1978, the “socialist market economy” has been ripening; that “seizure of the political by the law” became institutionalized and resonated throughout the world.

During the past three decades, reform trajectories have shifted from commercial law to civil law, then through administrative law and criminal law, culminating in constitutional reform. With the conversion from a planned economy to a market economy, the relationship between individuals and government has changed fundamentally. Some PRC reformers have been interested in testing through the Hong Kong case the possibility of establishing in the mainland a rule of law that would stabilize society, stimulate economic development, and integrate China into the world economy without jeopardizing the leading role of the Communist Party. Whereas in Western democracies the constitutionality of the law is subject to rigorous examination, in China the lack of respect for the constitution is explained by the lack of a clear conception of the position of the constitution in the hierarchy of the normative system. Absent a hierarchy of norms and a definition of the sources of the law, confusion has emerged in both the organization and the denomination of legal texts, administrative regulations, and decrees.

The “traditional” Chinese need for justice and equity harks back to imperial China. In the hands of government officials who enjoyed both administrative and judicial prerogatives in dealing with criminal cases, imperial justice was avoided as far as possible. Since the middle of the nineteenth century, Chinese law has been reformed along the lines of Western law, such as the German continental-legal tradition transmitted through Japan. Though this process of legal acculturation came to a halt for 30 years, it was resumed three decades ago, allowing various foreign legal models to compete for influence. The PRC was founded in 1949. A five-year attempt to forge a new democracy between 1949 and 1954 culminated in the inception of the first constitution of the PRC. Amid periods of legal nihilism, the Cultural Revolution marked the paroxysm of the revolutionary era. Between the middle of the 1950s and the end of the 1970s, the legal profession was practically eradicated during politically violent mass campaigns, and the law gave way to political regulations or political-administrative internal documents.

As opposed to the alternately anarchical and rigid functioning of the Maoist period, from 1978 onwards the law has become a legitimate instrument of public action brandished by authorities. Multiple interactions and arrangements between the spheres of law and politics and between state and society have impelled civil society actors to either take initiatives in favor of political or economic liberalization or denounce them based on judicial rhetoric in the name of the law and its ongoing reform.

From the late 1980s up until recent years, Chinese legal reform has largely centered on efforts to enact new legislation, including administrative regulations, in various areas of the substantive law. Within the judiciary, reform has taken the form of developing a modern adversarial trial system while introducing some elementary rules of evidence. Such reforms were deemed necessary to resolve the civil and commercial disputes arising from China's transformation from a planned to a market economy. Since they do not want to democratize the political system, the Chinese authorities have been keen to push forward the rule by law through a professional and autonomous court system, albeit of socialist texture.

The economic reforms of the 1980s have engendered in their wake swelling inequalities and feelings of injustice. Where socioeconomic interests have become diversified and sometimes contentious, the resulting disputes can only be solved, or at least alleviated, by impartial institutions located outside government. By initiating legal changes, the Chinese authorities have unleashed new forces and new demands in society. Chinese citizens have become more aware of their rights, demanding that their government and the courts guarantee them. One of the best indicators of this profound evolution has been the steady increase in litigation: in 2001, nearly 6 million cases were handled by Chinese courts, as opposed to 4.5 million in 1995 and fewer than 2 million in 1987. Many cases have taken the shape of “economic disputes” and administrative cases; that is, legal procedures against the government have become commonplace. Such legal modernization has been part of a more ambitious reform that pursued dramatic domestic objectives in order to perpetuate the rule of the Chinese Communist Party (CCP).

In 1989, the Administrative Litigation Law formally introduced administrative litigation into China's legal system. Many laws and regulations had first been drafted at the request of, or for, foreign investors, and were later extended to every legal entity or individual. The company law of 1994 and the contract law of 1999, for instance, apply not only to foreign enterprises and individuals but equally to Chinese entities and individuals.

In 1999, after the CCP had decided at the 15th Party Congress (1997) to “promote judicial reform,” the Supreme People's Court (SPC) announced a five-year plan to build a “fair, open, highly effective, honest and well-functioning” judicial system. “Judicial fairness” was highlighted as the cornerstone of the reform.

China's application for WTO membership and its accession in December 2001 sped up the process of unification and standardization of Chinese laws, as the PRC authorities were compelled to translate many multilateral commitments made to the World Trade Organization (WTO) into their own legal texts. This exogenous influence resonated in economic and civil law rather than in criminal, administrative, or constitutional law; the two large areas of the Chinese legal system have thus remained discrepant, owing to the instrumentalist approach to law favored by the CCP leadership.

Judicial reform acquired new momentum at the 16th Congress of the CCP in November 2002. General Secretary Jiang Zemin's report to the congress stressed that the constitution is the highest law of the land (“No organization or individual enjoys any privilege above the Constitution and laws”) and laid emphasis on procedural justice. This appears to put the Communist Party itself under the constitution, at least in theory.

In 2000, a new law on legislation ventured to establish a clearer hierarchy of legal norms in the country. Yet the National People's Congress (NPC), a puppet legislature, has rarely revoked regulations promulgated by provinces, municipalities, or counties in contradiction with national rules. Hence many intricate and almost unsolvable legal situations and disputes have arisen, in which contradictory legal principles or administrative rules compete with one another, a situation that helps local authorities protect themselves with specific regulations that the center has not been able to scrap or may not even be aware of.

In highly mobile and populous societies, officials face difficulties retaliating against particular individuals. Even in Shanghai, a prosperous, somewhat cosmopolitan city where individuals are less likely to be fearful of suing government officials, officials and CCP members still interfere in administrative litigation, thus facilitating judicial corruption. Nevertheless, local protectionism has been less prevalent in Shanghai, since the city has enjoyed better judges and clearer rules. Owing to its prosperity, the city has managed to recruit better-qualified officials and to provide them with law enforcement training and advice; that same prosperity also makes the government less susceptible to the will of any particular investor.

Even though the Chinese constitution recognizes the independence of the courts in their judicial activities, the one-party system has so far negated the emergence of an independent apparatus to control the bureaucracy.

Protectionism has a deep-rooted historical background in China. In today's PRC it is the result of two main factors: an institutional pattern deprived of checks and balances in the form of an independent control apparatus, and an economic strategy based on unequal development of the country's various regions. Many provinces, particularly the less competitive regions, tend to implement only those national legal and administrative rules that do not jeopardize their own interests, and to decree their own “domestic regulations.” Article 126 of the PRC Constitution states: “The people's courts exercise judicial power independently, in accordance with the provisions of the law, and are not subject to interference by any administrative organ, public organization or individual” (Cabestan, 2005, p. 64). Regrettably, this constitutional provision has amounted to little more than lip service.

Privatization, constitutionalization, and politicization interact, so that most cases concern private disputes initially devoid of any political dimension that end up being endowed with one. The right to equality (Article 23) has been frequently invoked to demand that public action be imbued with this principle. Two fellow graduates of the Institute of Law of the University of Sichuan, for instance, pleaded two cases of discrimination in hiring practices, one involving a restaurant in the city of Chengdu and the other the local branch of the People's Bank of China. Despite numerous appeals and lengthy, sophisticated pleas by their professor, they were unable to obtain the application of the constitution as the text of reference for resolving the conflict.

During the SARS crisis in the spring of 2003, citizens' demand that their fundamental right to information be respected was taken up explicitly in the media and in intellectual circles, and then voiced as vehement criticism of the government. In the event, the minister of health and the mayor of Beijing “resigned” and were then excluded from the party.

In yet another case, in September 2003, a lawyer in Beijing, after appealing an initial ruling against him for having brought into China a book deemed subversive by a Chinese customs officer, managed to have the judgment overturned and the book returned to him. The lawyer argued that the document specifying which works were forbidden was an internal one (and thus inaccessible to the public at large); it had been drawn up without the prior consent of the State Council or of the higher authorities of the customs administration, leaving free rein to arbitrarily behaving officials. This was the first case in which a Chinese citizen obtained from the Supreme Court in Beijing a ruling that the application of an internal administrative directive carrying political ramifications was null and void. The lawyer, Zhu Yuntao, himself a member of the Communist Party, defended the author of the confiscated book, explaining that, because of the author's functions and status, he could not in any case have sought to libel the institutions of the party. He asserted that respect for the rule of law and the struggle against bureaucratic arbitrariness should be considered a constitutional prerequisite and one of the party's tasks. Such situations are clear signs of a radical break, both with Maoism and with the judicial conservatism of the 1980s, even if they do not yet portend radical political change emanating from the secondary effects of growing rationalization.

Legal practice and justice are strongly contingent upon the political, economic, social, and cultural environment in which they develop. An independent judiciary and the rule by law have been hobbled by a shortage of resources; still a developing country, China can allocate only limited financial and human means to modernizing its legal system. Corruption is yet another salient problem: in China, as in many nations in transition, the rolling back of the state has more often favored new spheres of uncontrolled power and social inequalities without a safety net than new areas of self-restrained freedoms and well-accepted responsibilities. The might of the strong or the rich, the growing corruption of party and government cadres, and the venality of administrative positions are some of the most serious problems the PRC regime faces today.

Overhauling China's trial system has not sufficed to enable the Chinese courts to fulfill their functions and pursue justice. There is a new awareness among many in the Chinese legal community that all the reform measures will mean nothing without judicial independence, namely an institutional reform resetting the status of the courts and their relationship with the other branches of government. Lamentably, the rule by law in China has been interpreted and guaranteed according to the political, bureaucratic, and economic power of the parties involved rather than according to principles of law or equity. Nevertheless, the gap between traditional Chinese and Western legal values and norms has narrowed as Chinese law has been modernized in line with demands for impartial justice within Chinese society. New principles and norms irrigate the Chinese legal system, allowing first business organizations and businesspeople, and then less influential or more controlled segments of society (workers, peasants, minorities, journalists), to enjoy legal rights. The judiciary has been brought to center stage as an arbiter between private citizens and the government and as a guardian of citizens' rights against government encroachment, and the role of the courts has been constantly reassessed. Increasingly, China's lower courts have taken innovative measures to challenge traditional thinking and break ideological taboos, casting new light upon the relationship between the individual and the government.

The UN Security Council: Reforming a Perplexed Peacekeeper

Article 1(1) of the UN Charter states the primary purpose of the international organization: “To maintain international peace and security, and to that end: to take effective collective measures for the prevention and removal of threats to the peace, and for the suppression of acts of aggression or other breaches of the peace” (information in this section is drawn from Franck, 2006; Yoo, 2006; Schlichtmann, 1999; Imber, 2006; Trachsler, 2010; Weiss and Young, 2005). After 60 million deaths in World War II, states well understood the necessity of collective security and sought to forge parliamentary, executive, and judicial institutions that would insure against such global catastrophes. The charter devised a new institutional process by which “to save succeeding generations from the scourge of war.” In 1945, with Panzer divisions rolling across Poland fresh in mind, it was rational to assume that threats to peace would, as in the past, take the form of one state's armies massed on the borders of another, and that aggression would consist of armies marching across state borders.

The mandate to maintain international peace and security that the charter contoured for the UN Security Council (UNSC) did not subsume human rights issues. Soon after the charter came into effect, conventional military action ceased to be the principal form in which threats to peace arose. The shift to endemic and brutal civil wars, egregious violations of a growing canon of human rights, and clandestine terrorism directed at civilians has rendered obsolete the systemic norms meant to address threats to the peace. These new kinds of threats and acts of aggression are not those the charter's drafters had in mind when they formulated the United Nations' central mission of saving populations from the scourge of war. During decolonization, human rights came to bear both on human conscience and on UN operations. Public opinion throughout the world was united in holding that egregious violations of human rights could not be allowed to stand behind a facade of state sovereignty. The charter had not anticipated this shift in priorities, because civil wars and genocides did not stem from aggressive states but were brought about by terrorists and factional militias, entities not addressed by a charter fashioned to deal with state-to-state provocations.

The UN Charter admits no exception when ruling in favor of state sovereignty and against the use of force by nations, so preventing humanitarian disasters or rooting out terrorist organizations finds no explicit approval in the text of the UN Charter. The charter sets up a Security Council that has the authority to order nations to use force “as may be necessary to maintain or restore international peace and security.” Article 51 reaffirms that when a nation is attacked, it may use force to defend itself. Since the end of World War II, the majority of casualties have been due to intrastate rather than interstate wars; there have been no global, multistate conflicts, no great power conflicts, and no wars in Western Europe or North America—conflicts have become more localized.

The end of the Cold War marked a tipping point in the world's political, social, and economic makeup. Efforts led by the United Nations in general and the Security Council in particular resulted in a new emphasis on democracy, humanitarian needs, and human rights. Many countries felt the need to reexamine the structure of the United Nations as a whole in light of the post-Cold War world, and chiefly the structure of the Security Council, the one body with the resources and power to drastically affect the world community and international relations. Since the Security Council has the final say as to what constitutes a threat to or breach of the peace, it can decide that violating human rights poses such a threat. Progressively, the council has been willing to identify human rights violations as such and react accordingly, especially in grave circumstances. Because the UN Charter rules against intervention except in self-defense, outside efforts to stop civil wars or prevent humanitarian disasters are illegal without the permission of the Security Council.

The United Nations has no sovereignty of its own; any collective action rests on the member states. The Economic and Social Council (ECOSOC), for example, which has been given far-reaching responsibilities by the UN Charter, cannot work without the member states' power and consent. The Security Council's most powerful institution is the pentarchy, whose consensual approval is required for enforcement. Currently, the council comprises the Permanent Five (P5) members (the United States, the United Kingdom, France, Russia, and China) and 10 member states elected for two-year terms. The temporary seats are allocated according to the United Nations' five regional groupings: three African states; two each from Asia, Latin America, and the Western European and Others Group; and one state from Eastern Europe. Although the United Nations has informally adopted numerous administrative changes, the only change to its written charter was a one-off enlargement of the Security Council from 11 to 15 members, adding four nonpermanent seats in 1965. Since that last expansion, the number of UN member states has grown from 113 to 192 today.

The composition of the permanent members reflected the political environment after the Second World War; the council was never planned to be a law unto itself, above the law. Only the number five was meant to be permanent, because it is essential to the functioning of the consensus principle within the pentarchy, which makes it nearly impossible for the five to war among themselves. The present composition of the five permanent members of the Security Council does not cohere with the UN Charter's stipulations of equal rights and equitable geographical representation.

The rules vest decision making in a small minority of member states, partly elected and partly self-selected. The charter reinforces the point by requiring that members “agree to carry out the decisions of the Security Council in accordance with the present Charter.” Chapter VII of the charter empowers the Security Council to determine which acts do and do not constitute a threat to international peace and security, and the veto provisions of Article 27 authorize any permanent member to block a finding of violation. Since one member can overrule the views of the other 14, each P5 member can sanction or shield acts of violence, including its own and those of its allies.

Criticism revolves around the overrepresentation of Europe versus the flagrant nonrepresentation of the South. Considering that the veto prerogative lost some of its luster during the Cold War, when it degenerated into a tool of power politics, the need to remedy this imbalance looms large.

Reform of the Security Council has been a bone of contention for nearly 20 years. Several regional powers demand that the composition of the council better reflect their economic and political clout as well as their financial and personnel contributions to the United Nations. Aside from the pentarchy, reminiscent of a bygone constellation of powers, additional factors call for Security Council reform. Since the end of the Cold War, the veto has been used sparingly, but resolutions passed under Chapter VII of the charter (referred to as the charter's “teeth” because they enable military enforcement) have been on the rise. The principle of noninterference in the domestic affairs of states has given way to the “Responsibility to Protect.” Reform proponents endeavor to enhance the legitimacy of the council for the sake of more efficient decision making, more realistic mandates, and more determined implementation of its resolutions. Coming to grips with the lack of transparency in Security Council proceedings has also moved to the front burner.

Discussions have dragged on since 1992 without any decisive breakthrough, because the proposals are not only incongruous but sometimes mutually exclusive. Three main blueprints for reform have been on the table: that of the Group of Four (G4), made up of Brazil, Germany, India, and Japan; that of the Uniting for Consensus (UfC) group, which subsumes Italy, Pakistan, Spain, Argentina, Canada, Mexico, and others; and a third proposal tabled by the African Union (AU) with its 53 member states. The proposals vary as to the number of prospective permanent and nonpermanent seats and their occupants, as there are many contenders for these seats. Restructuring the Security Council implies amending the charter as well. This would require not only a two-thirds majority of 128 states in the General Assembly but also ratification of the changes by two-thirds of the members, including all five permanent Security Council members (Article 108 of the charter). The P5 pay lip service to the notion of a moderate expansion of the council but are not interested in any rapid change to the status quo.

From its inception in 1945, the veto right, the prerogative of the P5, was controversial, but the great powers insisted on it as a precondition for participating in any system of collective security at all. The AU in particular is adamant that future permanent members of the Security Council be given equal status with the P5. However, enlargement of the council may turn out to be a double-edged sword: apart from augmenting its legitimacy, it may also obstruct its decision-making ability and efficiency. The ECOSOC, which was expanded from 18 to 54 seats, set a negative precedent.

The call for dramatic change in the Security Council moved to the front burner after 9/11. “We have reached a fork in the road,” Secretary-General Kofi Annan told the General Assembly in September 2003, referring to a pressing need to choose between reform and irrelevance. Shortly thereafter, he appointed the High-Level Panel on Threats, Challenges, and Change (HLP), comprising 16 experts, including four former prime ministers, in an attempt to reach consensus.

The HLP report of December 2004 was adapted by Annan into his own reform agenda, In Larger Freedom (ILF), published in March 2005. He suggested enlarging the Security Council for extended and more equitable regional representation, while also confronting the charter's inadequacies with respect to self-defense, terrorism, domestic human rights abuses, and borderless threats such as HIV/AIDS and other pandemic diseases. Aware of the need to balance simple representational arguments against efficiency and effectiveness, Annan stated: “Those that contribute most to the organization financially, militarily, and diplomatically should participate more in Council decision-making.”

Most of the HLP's recommendations remained a dead letter. Over the last decade, rhetorical fireworks have not culminated in amendments to the charter but have nevertheless been conducive to a more permissive environment, facilitating pragmatic modifications in working methods and improving the council's democratic accountability. Perhaps amending the charter is still impossible, but overall the system of collective security has proven quite tractable in practice. Where there is willingness to make the charter work in new circumstances, the dead hand of the literal text has not always barred the way to transformative change via apposite reinterpretation of the existing charter. Nor need it preclude more radical and urgent reform of the system henceforth.

Germany: A Nonimmigration Nation, Rife with Immigrants—Article 16 of the Basic Law in the Limelight

The 1948 Universal Declaration of Human Rights (UDHR) enshrines in Article 14(1) “the right to seek and to enjoy in other countries asylum from persecution.” The principle of nonrefoulement, codified in the 1951 Geneva Convention on Refugees, which prohibits receiving states from expelling bona fide refugees to states in which they face persecution, has since matured into binding international law. The right of asylum is the right of states to grant asylum, not the right of individuals to be granted it (information in this section is drawn from Devine, 1993; Joppke, 1997; Hansen and Koehler, 2005; Martin, 1994; Green, 2001; Hellmann and others, 2005). Generosity vis-à-vis would-be immigrants tends to decouple the state from its people, tying it instead to unpopular, somewhat elitist principles of humanitarianism. States facing immigrants and asylum seekers endeavor to reconcile internal pressures with their commitment to universal principles of human rights.

France and Germany are two of the most important migrant-receiving countries in Europe and, along with the United Kingdom, have among the largest ethnic minority populations on the continent. Germany's leaders have stated that “the Federal Republic of Germany is not, nor shall it become a country of immigration.” The majority of politicians and Germans (mainly elites) still cling to this idea.

For many years, German exceptionalism consisted of a restrictive and widely criticized citizenship and naturalization regime based on blood affiliation (jus sanguinis), a derivative of völkisch nationalism. On the other hand, German constitutional law strongly protects the right of asylum and the fundamental human rights of noncitizens, independently of citizenship. The progenitors of the Basic Law, many of whom had been exiled during the Nazi regime, conceived of an asylum law that went far beyond existing international law as a conscious act of redemption and atonement.

The constitutional provision in Article 116 hinged upon Germany's existing citizenship law (the RuStAG) of July 22, 1913, which stipulated, in keeping with that era's ethnocultural nationalism, that German citizenship could be inherited only by descent (the principle of jus sanguinis). When the West German Basic Law was drafted (1948), Article 116 defined as German (but not West German) all those of German cultural and ethnic descent who had settled within Germany's boundaries as of December 31, 1937. The expansive provision of cultural descent covered all the expellees (Vertriebene, Germans who had either fled their homes in parts of Central and Eastern Europe or had been expelled following World War II), even if they did not actually hold formal German nationality at the time. Moreover, because Article 116 included the territory of the GDR (the German Democratic Republic, namely East Germany), all its citizens were automatically German citizens, too. By 1950, 8 million expellees had already settled in the FRG (the Federal Republic of Germany, or West Germany), but after 1950, with the Cold War taking hold in Europe, the focus of migration to West Germany shifted to East Germans fleeing the GDR. Between 1949 and 1961, 2.5 to 3 million GDR residents settled in the FRG and integrated rather quickly into a hungry-for-labor economy.

The experience of Nazi tyranny, evinced most brutally in the Holocaust, delegitimized völkisch conceptions of belonging and Aryan ideals, institutionalizing instead universal human rights (Hansen and Koehler, 2005). In an attempt to atone for some of the horrors of the Nazi era, Germany's 1949 Basic Law, or constitution, sought to create a safe haven for those around the world suffering from totalitarian oppression. Article 16 stated that “persons persecuted on political grounds shall enjoy the right of asylum.” There were no numerical limits or quotas on those obtaining asylum in Germany. Furthermore, because asylum was a constitutionally guaranteed right, applicants were entitled to public assistance and accommodations until their applications were resolved. The parallel inflow of ethnic Germans (who were granted automatic citizenship under Article 116 of the Basic Law) and asylum seekers created invidious distinctions.

Most foreigners living in Germany have been a legacy of the failed guest-worker policies of the 1950s and 1960s. The guest workers sustained Germany's economic miracle, but they did not follow the plan to leave their manufacturing, mining, and construction jobs to make room for fresh temporary workers: it suited their employers to keep them, and it suited the foreign workers to stay and bring their families to Germany. The Aliens Law of 1965 was set up to regulate the status of guest workers in (West) Germany, treating foreigners as economic commodities at the discretion of “German state interests.” Assertive federal courts nonetheless made aliens equal to German citizens in crucial respects, construing aliens' rights analogously to the rights of Germans and arguing that acquired social and economic ties entailed the right of permanent stay, including access to wide-ranging social benefits, pursuant to the welfare-state precepts embedded in the constitution. The massive settlement of probationary migrant workers thus transformed a narrow labor market policy into a ponderous immigration phenomenon.

From 1913 to 1993, only those of German descent had a right to German citizenship. The arrival of guest workers was negotiated outside the framework of legislation; since there was no policy on naturalization, power over it rested with the Länder (the federal states of Germany). In 1977, the federal minister of the interior issued “guidelines” on naturalization, nonenforceable instructions to the Länder. Initially, there was great variation among the Länder in the treatment of asylum seekers. The southern Länder of Bavaria and Baden-Württemberg, conservative but also vulnerable to south-north migrations, spearheaded measures of deterrence such as herding asylum seekers into camps, providing in-kind benefits only, imposing work bans, and deporting rejected asylum applicants more quickly. The northern Länder of Lower Saxony and North Rhine-Westphalia and the city-states of Bremen and Hamburg, liberal but also more insulated, originally shied away from such negative measures. Soon enough, though, the intra-German north-south pull of asylum seekers forced the gentler north into a “deterrence competition” that eventually flattened such differences.

An influential early-1970s decision of the German constitutional court required candidates for naturalization to renounce all other citizenships. In addition, naturalization had to be in the interest of Germany, not of the migrant. Within a year of unification, the scale tipped toward easing the requirements for naturalization; although modest by international standards, the reform reflected a major conceptual shift for Germany. Before 1993, no one except members of the ethnic German community had a right to acquire German nationality, and all naturalizations were discretionary. Henceforth, naturalization became a legally enforceable entitlement for those born and educated in the country and those with substantial periods of residence there. Aliens resident in Germany between the ages of 16 and 23, for example, had the right to naturalize if they fulfilled the following conditions: renunciation of previous citizenship; normal residence in the Federal Republic for at least eight years; completion of six years' full-time education, at least four of them at the secondary level; and no criminal convictions. Those living in Germany for 15 years had an entitlement to naturalize if they renounced their previous citizenship, had not been convicted of a criminal offense, and were able to support themselves without claiming unemployment benefits or income support.

Since the mid-1990s, Germany changed its position and blocked further integration, and policies concerning asylum and refugees became increasingly restrictive. Human rights considerations began to give way to economic arguments justifying the more restrictive policies. Following the Conventions of Schengen (June 14, 1985) and Dublin (June 19, 1990), security considerations became more prevalent. Nevertheless, neither the Schengen Agreement nor the Dublin Convention could have solved the German “asylum problem” of asylum seekers thronging the country. As the Cold War wound down, the frontiers to Eastern Europe opened and far more refugees and asylum seekers ventured into European Community (EC) territory, especially German territory; the Schengen arrangements reframed the matter as a common European interest. Changing the Basic Law was presented as necessary in order to meet European requirements, so any consequent change to the liberal asylum law in Germany was now no longer a failure of German politics or the breaking of a taboo, but rather a consequence of decisions at the European level. The liberal-conservative government fell into line with intergovernmental rules, using a tactical maneuver to portray the change as a genuine obligation to the European Union (EU). In effect, Germany restricted its asylum law to an extent that was by no means necessary on the basis of the EU's soft-law rules.

The Amsterdam Treaty (May 1997) achieved freedom of movement and introduced an Area of Freedom, Security, and Justice as a new objective for the EU. European refugee and asylum policy was reduced to compensatory measures to safeguard internal security in a border-free Europe, restricted to the lowest common denominator.

Following a landslide victory in 1998, Gerhard Schröder's SPD/Green coalition came to power with a comfortable majority in both chambers. On January 14, 1999, the coalition presented a radical overhaul of German citizenship law, fully allowing for dual citizenship. The principal provisions shortened the residence period required to qualify for naturalization from 15 to 8 years (5 years for children, 3 years for spouses) and introduced jus soli for children of foreign parents who had been born in Germany or who had immigrated before the age of 14. All major newspapers, churches, and unions supported the reform.

Despite the favorable climate, the reform never came to fruition. At first, the opposition switched venues: Rather than fighting the government in parliament, where it was a minority in both houses, it took the debate to the streets. It failed to force a plebiscite but managed to launch a signature campaign. Second, in light of the impending elections in Hesse, one of the 16 Länder composing Germany, the opposition broadened the debate with the goal of boosting the chances of the Christian Democratic Union (CDU) candidate Koch against the Red-Green government. Third, the opposition reframed the discourse by depicting itself as a champion of integration rather than its adversary, proclaiming that foreign cocitizens enrich German society, so integration was not only a necessity but also a desirable political opportunity. Armed with such a positive conception of integration, the opposition portrayed dual citizenship as inimical to it, namely divisive and creating precisely the “segregated communities” that would endanger a culture of “tolerance and togetherness.”

The EU played into the hands of the reform's adversaries just as it had in 1993, impeding further leniency with regard to naturalization. In the interplay between German policy and the structures of European governance, the Maastricht Treaty (1992) empowered European regions such as the German Länder. In 1999, the Länder used the constitutional powers they had thereby gained to vigorously defend their particular interests, since they were the ones having to accommodate asylum seekers. The Länder compelled the German delegation to reject more integrationist proposals. The relative distribution of asylum seekers in Europe had changed to Germany's benefit, with their number dropping rapidly from a peak of 468,200 in 1992 to 104,400 in 1997, and the Länder were unwilling to dilute national sovereignty to an extent that could enable European decision makers to reverse that trend. Helmut Kohl and the federal government finally yielded, upholding the antidual-nationality, anti-integrationist stance.

Comparing Defense, Police, and Judiciary across Nations

This chapter scrutinizes changes to core activities and provisions of the nation-state in the Weberian sense. Those domains used to be highly centralized and monopolistic; like it or not, that is no longer the case.

Exogenous changes and institutional change underlie the crisis-reform literature. When the dissonance with a transformed environment is too great and the crisis severely erodes the legitimacy of a policy sector, there is a chance to diverge from the usual incremental path and bring about reform after overcoming individual, organizational, and political stumbling blocks (Resodihardjo, 2009). France and Israel have reformed their armies, their erstwhile sacred cows. Unsupportive military and financial trends, as well as the social changes sweeping industrial democracies, finally took their toll on France, gnawing at the legitimacy of its army and in particular at conscription. After years in which France stopped short of abolition and instead shortened the term of service, French authorities retained compulsory military service but made it selective and differentiated. The mission was no longer defending national territory but rather collaborating with international armed forces (NATO), which entailed a more professional and modular military model. Such shifts, alongside an ardent political entrepreneur, tipped the balance in favor of full professionalization of the army, paving the way to the end of conscription in 1996.

Like France (and many modern developed societies, for that matter), Israeli society has been marked by rising individualism and the fading appeal of the traditional values and attitudes upon which the social acceptability of conscription was based. As in France, Israel resorted to more selective recruiting as more and more groups and individuals dodged conscription, so it is no longer universal. An even more salient reform has taken place in the reserve army. The number of reserve-duty days has been curtailed dramatically by, among other things, lowering the maximum age for reservists, since the cost of reserve duty has been shifted from the National Insurance Institute to the army. Rather than standing above the market or even competing with it, the IDF has been subordinated to the market. Casualties are no longer conceived of as a necessary evil, and more and more parents, especially bereaved ones, are asking to be heard. In the same vein, the mass media have become more exacting and fierce with respect to once tabooed, hushed-up security matters. Although the army is becoming more professional, in line with the French one, conscription has not been abrogated. Still, in compliance with the recommendations of the 2007 Brodet Committee, extensive privatization has taken place within the army; kitchen services, transportation, construction, and maintenance are outsourced or contracted out.

The case of the Federal Emergency Management Agency (FEMA) is above all a caveat against organizational schizophrenia, or identity crisis. Preventing natural disasters altogether is beyond mankind's reach, but reducing vulnerabilities is possible. Emergency management can be a thankless task, since successes are inherently nonevents while catastrophes and failures attract (mainly media) attention. The case of FEMA exemplifies how a charismatic and adept director can mobilize politicians and public opinion in order to attract the wherewithal to transform a befuddled, incompetent organization into a state-of-the-art emergency management agency. In the aftermath of 9/11, FEMA was subsumed under the newly erected Department of Homeland Security (DHS). This turned out to be a lamentable mistake, as FEMA lost its identity, becoming just another cog amid a plethora of agencies, most of which were much more proficient at dealing with internal security matters. FEMA lost its relative independence, becoming a component of a convoluted structure. It also lost its human resources as more and more staff left, exhausted by turf wars within the DHS, taking with them accumulated knowledge, lore, and know-how and rendering the agency ineffectual. The devastation of Hurricane Katrina was a wake-up call, inculcating among citizens and politicians alike the notion that a specialized emergency management agency is crucial, since terrorist attacks are not the only danger to human lives. Lessons have been learned: A new law regulates emergency management, laying emphasis on citizens' participation while empowering them to take preventive measures to reduce risks and vulnerabilities within their own communities. Cooperation between the different governmental levels (federal, state, and municipal) has been addressed and streamlined.
FEMA has won back its identity and its mission of emergency management, realizing it should stick to that mission since there are enough specialized, better-equipped security agencies to go around. A clear and delineated mission is a sine qua non of every effective organization, as resources should by nature be geared toward that mission.

From its outset at the end of the 1980s, the paradigm of “community policing” has been to the police what new public management (NPM) has been to public administration. In those heady days, community policing was the putative panacea for police maladies, whether real or imaginary, arising from the reconsideration of police strategies and practices in the 1960s and 1970s. Community policing is regarded as a strategy for improving relations between the police and the public and enhancing police effectiveness in preventing and controlling crime. The four elements of community policing are organization of community-based crime prevention, reorientation of patrol activities to emphasize nonemergency servicing, increased police accountability to local communities, and decentralization of command (Skolnick and Bayley, 1988). Community policing also raises concerns about the implications of thoroughly integrating the police into the community.

In the cases mentioned above, this undertaking has been complicated even further by the oppressive military past of the police. The vestiges of a bygone era, in which the police force was the repressive arm of a tyrannical racial regime and one of its most emblematic features, are not easily uprooted. In South Africa, civil society groups are viewed with suspicion and regarded as threatening security rather than contributing to it (Marks and others, 2009). Catchy slogans can be excogitated overnight, and new uniforms may be quickly matched with novel insignia, but altogether forsaking so deeply ingrained a military, oppressive past is a tall order that entails much more than that. Inertia makes it very difficult for organizations to change. The nature of police work, as well as entrenched normative schemas pertaining to discipline, makes it even harder for the obstinate police to change (Skolnick and Bayley, 1988).

In Colombia, as in South Africa, police reform was subsumed under a much more comprehensive reform changing the essence of the whole regime. New constitutions were forged in 1991 and 1994, in Colombia and South Africa respectively, which attests to the profound, culturally embedded nature of the shift underpinning police reform: “Community policing means different things in different communities, and to make it work can involve reinventing government, not just the police department” (Skogan and Hartnett, 1997, p. 4). Although both the army and the police exert force, they are by no means the same. Public security differs from national security in that it emphasizes the protection of persons, property, and democratic political institutions from internal or external threats. National security, in contrast, emphasizes the protection of the state and territorial integrity from other state actors as well as from transstate actors such as organized crime, terrorism, and the like. The target of the police is the community, with which it has to collaborate while empowering it; the target of the army is the nation-state's enemies, whom it has to annihilate (Bailey and Dammert, 2006). This distinction is especially critical in instances of democratic transition, since military forces are neither trained nor equipped to patrol city streets, village plazas, or country byways. Street demonstrations, for instance, are normal channels of participation in political life. Army units are typically less skillful in handling crowds than are police, and incidents of unnecessary violence are to be expected, hampering democratic transition (Bailey and Valenzuela, 1997).

Demilitarization of the Colombian police has been a tedious venture. In 1991, a civilian defense minister was appointed for the first time in 40 years, granting the police greater operational autonomy with respect to the army. Contrary to expectations, this did not wrest the police from the military line of command completely. Military chains of control are known to be very persistent, civilian head notwithstanding: “Army officers assigned to police duties may theoretically report to civilian superiors but the more likely reality is that the military chain of command will remain in effect to an important degree” (Bailey and Valenzuela, 1997, p. 55). The Colombian police force has held onto its military traits, which accounts for its identity crisis.

During the 1990s, the police force was finally managed by police officers who had been trained as police, by police, in police academies, under the control of a chief of police who was himself a police officer. Despite the civilian defense minister, police officers declined a new career path and clung to their military-style ranks. Midranking police officials opposed university training that would specialize them in community policing so as to serve citizens better. Even the police's great reformer was none other than General Jose Serrano.

Change did come about, even if more slowly than some had anticipated; there is no magic bullet to remake people's hearts and souls and to eradicate entrenched norms. NPM directives were followed through in South Africa and Colombia, stressing management, targets, performance indicators, public participation, and accountability. Colombian citizens collaborate with the police and cooperate among themselves in thousands of community fronts. Citizens' views have been solicited via surveys. The metamorphosis of the Colombian police was couched in grand concepts such as cultural transformation (De la Torre, 2008; Ruiz Vasquez, 2012). The once opprobrious police force has been reborn as the government institution exhibiting the largest increase in public confidence over the past decade, with the armed forces coming in a distant second (De la Torre, 2008; Llorente, 2006).

Years of colonialism, apartheid, and repression in South Africa have left in their wake an ethnically heterogeneous but highly segregated population. The police force is the spitting image of the society in which it is embedded: Race and gender imbalances are striking; white men chiefly populate the higher ranks and certain elite units, while women, indigenous people, and ethnic minorities constitute the rank and file. Following years of oppression and deprivation, ordinary people lack the human capital to ascend the social ladder, while the police force still strives to break away from its military past. The police are more than happy to engage the private sector rather than ordinary citizens, so community policing has become a dead letter. Most behavioral changes are perfunctory because the management of labor relations has remained autocratic. Even if the South African police lag behind in espousing a civilian, accountable, service-oriented stance, human rights are upheld by a mosaic of bodies and organizations that stay on guard against human rights infringements, regulating and keeping an eye on the police. Many NPM-style measures did take hold, such as performance indicators and privatization via partnerships and joint working agreements with private security companies.

Ushering in private providers has also been an eminent feature of the prison and probation services in Britain. Following many reports over the years by Her Majesty's chief inspector of prisons depicting prisons (many of them outmoded Victorian ones) as a disgrace to any civilized society (Flynn, 2007), market testing and compulsory competitive tendering were introduced, the Conservatives' favorite solution, embraced by their successors. Private and not-for-profit providers have also been introduced into the probation services, encroaching upon a preeminent Weberian attribute of the state; that is, the monopoly on organized violence. Private is not necessarily better, and there is still much room for improvement in both the private and public sectors. In South Africa, 11 homeland forces have been consolidated into a single national police force, whereas in Britain the prison and probation services have been amalgamated under the aegis of a new executive agency, the National Offender Management Service (NOMS); the road to a seamless service working concertedly is very long and fraught with institutional and cognitive barriers.

Focusing on citizens as customers and partners whose “voice” is not only heard but often solicited is a salient feature of all three reforms vetted in this section. Performance indicators have been forged and targets set forth, so the public can gauge whether it is getting its money's worth. Such elaborate accountability mechanisms are yet another feature of NPM-inspired reforms. A major shift took place in Britain as the “law-abiding majority” came to be seen as the client of the prison and probation services. It is to their welfare and needs that those services now cater, while culprits are no longer treated as helpless victims of circumstance but rather as people who chose to do wrong and need to redeem themselves and to indemnify those they have hurt and the public at large. All these measuring, evaluating, and performance regimes entail much effort and many resources, but they do bear fruit: This outward-facing service demonstrates, on a regular basis, the value of what it does vis-à-vis public expectations.

Judicial reform in China has also been embedded in an all-encompassing transformation, namely that of the socialist market economy since the country's opening up in 1978 and the civil and commercial disputes arising from it. Alongside the resurrection of the legal professions, and as Western norms and institutions have impinged more and more on China, Chinese citizens have appealed to the law in the name of equity and justice, and courts have increasingly managed to cater to them. That latent revolution is still incipient and inconsistent, mainly because there is no hierarchy of legal norms in the country. In addition, the legal system is not independent but rather highly politicized, and rife with corruption and incompetence. The change taking place, slowly and covertly, attests to the flexible nature of law, which facilitates reinterpretation in order to protect human and property rights from governmental arbitrariness without altering the legal framework or its provisions.

The flexibility of the law is also evident in the case of the UN Security Council. Under new, emergent circumstances, the same charter that regarded state sovereignty as sacrosanct has been reinterpreted, justifying encroachment on sovereignty in cases of human rights violations. The environment that gave rise to the charter has changed substantially, creating a dissonance that had to be resolved in keeping with the new circumstances, since the charter has to accommodate the member states and not the other way around. Changing the composition of the Security Council has proved much more grueling; the permanent members have been reluctant to forfeit their stronghold, or even to share it with contending powers.

While proclaiming itself not to be an immigration nation, Germany had one of the most lenient laws regarding noncitizens in 1949, since the atrocities of the Nazi era still loomed large. Germany was for many years the promised land of bona fide and not-so-bona-fide asylum seekers, guest workers, and ethnic German immigrants. The influx of immigrants pouring into Germany continued undisturbed; the matter was delicate and inflammable in Germany, which suited other European countries just fine, as their receptive ally kept the problem out of their own backyards. When internal European borders started fading away as the European Common Market evolved into a full-fledged European Union, it was no longer a German issue but rather a European one. Germany seized the opportunity to align itself with its neighbors and change its policy and laws concerning asylum and immigration. Asylum seekers could no longer “shop around” in more than one EU nation, because denial of entry by one country meant refusal by all.

The German case stands out as the antipode of the Chinese one. A democracy should be responsive to its citizens and attentive to supranational behests. Germany acted in line with dictates from above and catered to its constituencies' outcries that immigrants were exhausting the country's wherewithal. China, despite internal and external pressures, is much slower to change, and alterations come about not wholeheartedly but rather as lip service paid to economic development. Adjustments are subtle, covert, often tacit, and slow; very much unlike the fanfare that accompanies legislation and policy in a democracy such as Germany.

If anyone was tempted to think that the loci of state power are immune to the changes that have been sweeping service provision for quite some time now, it should by now be evident that NPM strands of thought pervade even the traditional core of state activities such as national defense, public security, and jurisprudence. Some may lament the nation-state's untimely demise, whereas others may look upon these proceedings favorably, perceiving them as the welcome rebirth of a phoenix.
