CHAPTER 72

LEGAL AND POLICY ISSUES OF CENSORSHIP AND CONTENT FILTERING

Lee Tien, Seth Finkelstein, and Steven Lovaas

72.1 INTRODUCTION

72.1.1 Scope of This Chapter: Government Intervention

72.1.2 Whose Laws? Whose Standards?

72.1.3 Defining Objectionable Material: International Differences

72.2 U.S. CONTEXT: FIRST AMENDMENT RIGHTS

72.2.1 What Does the First Amendment Protect?

72.2.2 Basic First Amendment Principles

72.2.3 Limitations on Government Interference with Speech

72.2.4 Exceptions Where Speech Can Legally Be Limited

72.2.5 Legislation and Legislative Initiatives in the United States

72.2.6 Attempts to Control Access: Case Law

72.3 PARENTAL INVOLVEMENT/RESPONSIBILITY

72.4 SUMMARY

72.5 FURTHER READING

72.6 NOTES

72.1 INTRODUCTION

Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive, and impart information and ideas through any media and regardless of frontiers.1

One might think that the Internet will make this ringing proclamation a reality. Like no other technology, the Internet transcends national borders and eliminates barriers to the free flow of information. Governments, however, are trying to control speech on the Internet.

72.1.1 Scope of This Chapter: Government Intervention.

Many nations protect rights of free speech and free expression. The First Amendment to the U.S. Constitution provides that “Congress shall make no law … abridging the freedom of speech or of the press.” The Canadian Charter of Rights and Freedoms protects freedom of speech and press in article 2 as fundamental rights. While Great Britain has no formal constitution, the Magna Carta of 1215, the 1512 Privilege of Parliament Act, the 1689 Bill of Rights, and the 1911 Parliament Act all protect the freedom of speech.2

Despite such constitutional protections, speech is regulated everywhere. Article 1 of the Canadian Charter of Rights and Freedoms permits “demonstrably reasonable” limitations on freedom of expression. Great Britain and Canada have promulgated legislation regulating racially defamatory speech, group defamation, or speech that incites racial hatred. Although American law is far more protective of speech, some expression is deemed “unprotected” by the First Amendment, and even protected speech can be regulated in a variety of circumstances.

Accordingly, some governments have enacted laws prohibiting certain content on the Internet and have sought to prosecute users and service providers. Others have tried to control access by insisting on the installation of national “proxy servers” and requiring the blocking of targeted Web sites. Governments also have encouraged “self-regulation” intended to enlist service providers to control customer behavior.
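
At the network level, such blocking is typically implemented at an ISP or national gateway as a blocklist lookup performed before a request is forwarded. The following minimal sketch illustrates the idea; the domain names, blocklist, and responses are invented for illustration and do not describe any particular country's system:

```python
# Hypothetical sketch of gateway-level blocking (all names invented).
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"banned-example.org", "blocked.example"}

def is_blocked(url: str) -> bool:
    """True if the URL's host is a blocklisted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

def handle_request(url: str) -> str:
    """Decide, as a national proxy might, whether to forward a request."""
    if is_blocked(url):
        return "403 Forbidden: blocked by national policy"
    return "200 OK: forwarded to destination"

print(handle_request("http://banned-example.org/page"))  # blocked
print(handle_request("http://news.example.com/story"))   # forwarded
```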

72.1.2 Whose Laws? Whose Standards?

It is often thought that governments cannot successfully censor the Internet. After all, one nation's laws may be unenforceable elsewhere. But national laws can be applied to communication intermediaries like Internet service providers (ISPs), blocking access by people in that country.

Moreover, just as speech originating in one nation can reach a worldwide audience, one nation's laws can have powerful effects in other nations. As commentators have put it, “[t]he reach of the Internet multiplies both the number of laws and the number of jurisdictions applicable to speech transmitted on-line.”3 Indeed, “[b]ecause content posted on the Internet is instantaneously transmitted worldwide, Internet users and providers almost automatically face potential liability in any country to which the Internet is connected—not just the nation where the speech was created or where the author lives.”4

The most prominent example is a recent case in which a French court ordered the U.S.-based Yahoo to “take all necessary measures to dissuade and make impossible any access via Yahoo.com to the auction service for Nazi merchandise as well as to any other site or service that may be construed as an apology for Nazism or contesting the reality of Nazi crimes.”5 The French court held that “the simple act of displaying [Nazi artifacts] in France violates Article R645-1 of the Penal Code and therefore [is] a threat to internal public order.”6 It described the mere availability of such information to be “a connecting link with France, which renders our jurisdiction perfectly competent to rule in this matter.”7 After the French order was handed down, Yahoo filed suit in the United States seeking a judgment that it need not comply with the French order.8 The French parties moved to dismiss the U.S. case for lack of personal jurisdiction, but the U.S. district court denied the motion, allowing Yahoo to proceed.9 In 2004, however, the U.S. Court of Appeals for the Ninth Circuit reversed, holding that the district court lacked personal jurisdiction over the French parties, while also noting that Yahoo should not expect both to benefit from the worldwide availability of its product and to avoid the possible consequences of that availability.

Resolving international jurisdiction in cyberspace is one of the most difficult and important issues for the Internet, but it is beyond the scope of this chapter. Moreover, the ways in which the laws of one nation may affect Internet speech in another extend beyond the topic of this chapter, state-backed censorship. Even when governments do not themselves target particular kinds of speech, their laws may give private parties legal rights against expression originating overseas, as in the area of defamation.10 Laws relating to commerce also may raise free speech concerns, as in the areas of intellectual property11 and privacy regulation. For instance, the 1995 European Union Data Protection Directive generally restricts all “processing” of “personal data,” giving data subjects legal rights quite different from those found in the United States.12

72.1.3 Defining Objectionable Material: International Differences.

For most countries, the major concerns stem from the increased availability of “objectionable” content over the Internet. This section focuses on how the meaning of “objectionable” differs from country to country.

72.1.3.1 Sex.

A common type of objectionable material is speech about sex. In the United States, a series of attempts has been made to regulate the availability of sexually oriented speech over the Internet, although with little success.13 Several Middle Eastern nations also block pornographic sites.14

In 1995, the Bavarian Justice Ministry informed CompuServe that senior company officials could face prison terms for violation of German antipornography laws. German police provided CompuServe a list of the discussion groups that contained potentially objectionable material.15 In February 1997, Germany indicted Felix Somm, head of CompuServe's German subsidiary, for failure to prevent the dissemination of illegal material on its online service, making Germany the first western democracy to prosecute an official of an online service for illegal Internet content.16 German prosecutors argued that because CompuServe had access to screening software, the company had the opportunity to block the offending material but failed to do so. Soon thereafter, Germany enacted its Information and Communications Services Act of 1997 (ICSA). The ICSA, however, has been sharply criticized for failing to resolve the uncertainties faced by ISPs.17

72.1.3.2 Hate.

Another major category of objectionable speech is what might loosely be called “hate speech.”18 American hate sites include sites promoting white supremacy, Nazi or neo-Nazi views, “skinheads,” and the Christian Identity Movement.19 Internet content that promotes Nazi views and Holocaust denial has attracted considerable international attention.

For instance, the German constitution, known as the Basic Law, incorporates freedom of expression as a fundamental individual right. But Article 5, the main provision concerning free speech, expressly provides that the government may limit an individual's expressive right if it conflicts with other people's rights, public order, or criminal laws. Thus, under Germany's general laws and the laws protecting youths, publishing or distributing neo-Nazi or Holocaust denial literature is a criminal offense.20

In February 1996, German prosecutors in Mannheim investigated several ISPs, including America Online, regarding alleged distribution of neo-Nazi material on the Internet in violation of anti-Nazi laws.21 More recently, German-born Frederick Toben, who uses the Web site of his Australian-based Adelaide Institute to advocate Holocaust denial, was found guilty of offending the memory of the dead.22 The lower court had ruled “that German law against inciting racial hatred could not be applied to content on a foreign website.”23 Germany's Federal Court of Justice, the Bundesgerichtshof, overturned the lower court ruling and found that German law applies to Internet content that originates outside the country's borders, so long as people inside of Germany can access that content.24 In a 2002 study, Harvard Law School professor Jonathan Zittrain found that Google was systematically excluding search results containing banned content in the French and German versions of its site and that this was apparently “because of pressure applied or perceived by the respective governments.”25

Canadian law directs courts to balance an individual's free speech rights with societal equality interests. Thus, Canadian courts have upheld hate speech convictions under laws that criminalize the willful promotion of hatred.26 The Canadian Supreme Court held the statute constitutional.27 The Court defined hate speech as including a requirement that “the accused ‘promote’ hatred against an identifiable group, indicating more than simple encouragement or advancement and rather required that the hate monger intend or foresee as substantially certain a direct and active stimulation of hatred against an identifiable group.”28

The Internet has raised new issues in Canada regarding hate speech. Bernard Klatt runs the Fairview Technology Center out of his home in British Columbia, and about a dozen neo-Nazi, white supremacist, and skinhead clients have used his server to publish material against immigration and homosexuality while celebrating Hitler's accomplishments and “Euro-Christianity.” The Canadian government has attempted to shut down Klatt, but he and his clients have insisted that they had not broken any laws.29

The Zündelsite, a multilingual archive and news service, may be the most prominent Holocaust denial Web site run by an individual. Its administrator, German Canadian Ernst Zündel, became one of the denial movement's martyrs when he was convicted in Canada for his Holocaust denial pamphlets.30 The Canadian Supreme Court later overturned his conviction.31 Zündel's U.S.-based Web page then became the target of an official inquiry by the Canadian Human Rights Commission.32 Zündel was deported from the United States to Canada in 2003. The Canadian government determined that he represented a risk to national security and deported him to Germany, where he was sentenced in 2007 to the maximum term of five years on outstanding charges of incitement for Holocaust denial.33

72.1.3.3 Politics and Culture.

Various countries also regulate Internet content that generally threatens the government or political, social, and cultural values.

72.1.3.3.1 China.

At the end of 2006, there were approximately 137 million Internet users in China, an increase of 34 million over the previous year.34 The Chinese government, although eager to capitalize on the Internet, feels that the “Internet is a very real threat to the hold their dictatorial regime has over the country.”35 Accordingly, China severely restricts communication via the Internet, including much dissent and the free reporting of news. The so-called Measures for Managing Internet Information Services prohibit private Web sites from publishing “news” without prior approval from communist officials.36

A major obstacle for would-be Chinese Internet users is the registration process required to get online. One must pick an ISP and provide identification. One must also fill out a Police File Report Form that will be sent to the ISP, the local Public Security Bureau (PSB), and the provincial-level PSB Computer Security and Supervision Office. In addition, users must complete a Net Access Responsibility Agreement, pledging “not to use the Internet to threaten state security or reveal state secrets” or to read, reproduce, or transmit material that “endangers the state, obstructs public safety, or is obscene or pornographic.”37 The ISP application itself requests information such as employer's address, home address, and profession; home, office, cell phone, and pager numbers; as well as details about the computer, the modem type, and permit number, which is assigned by the PSB.38

In January 2000, the Chinese newspaper the People's Daily published new Internet regulations issued by China's State Secrecy Bureau. These laws “ban the release, discussion, or transfer” of “state secret information” on bulletin board systems, chat rooms, or Internet news groups. E-mail users were banned from sending or forwarding state secrets by e-mail. In addition, all Web sites are required to undergo a security check. A provision requiring that anyone using encryption technology register with the government was later rescinded.39

In May 2000, after the China Finance Information Network, a financial Web site, republished a Hong Kong newspaper article about corruption by a provincial official, Chinese public security officials suspended the site for 15 days and fined it 15,000 yuan (US $1,800).40 On August 3, 2000, state security officials forced a Chinese ISP to shut down the Xinwenming (New Culture Forum) site run by a group of dissidents in Shandong Province for posting “reactionary content.”41

The following month, China's State Council passed Internet laws that limited foreign investment and imposed strict surveillance requirements on Internet content by mandating that Internet content providers monitor both the material they publish and the people who access that material, including the times users log onto the Internet, their account numbers, their Internet addresses, and the phone numbers they are dialing in from.42

On November 7, 2000, the Ministry of Information Industry and the Information Office of the State Council released new laws banning commercial media organizations from setting up independent news sites; prohibiting commercial portals from carrying news items based on their own sources; and requiring China-based Web sites to obtain permission from the State Council Information Office before linking to overseas news sites or carrying news from overseas news media. The new Internet laws also restrict the content of online chat rooms and bulletin boards.43

In December 2000, the National People's Congress passed a new law to foster Internet safety. This law criminalizes several forms of online political activity, including using the Internet to “incite the overthrow of state power, topple the socialist system, or … destroy national unity,” promote “cults,” or support the independence of Taiwan.44

On February 27, 2001, the Ministry of Public Security announced that it had released Internet filtering software called Internet Police 110 the previous day. The software comes in three versions, for households, Internet cafés, and schools, and can monitor Web traffic and delete or block messages from sources deemed “offensive.” According to a news report published in the China Securities News, it can send text or voice warnings45 to network administrators about any unauthorized Internet surfing.46

On March 20, 2001, official Chinese media reported that China was developing a surveillance system to monitor activities on the Internet, similar to the data-recording “black box” installed in commercial airplanes. According to the Hong Kong iMail, the system would be able to record all communications through the Internet.47

In April 2001, the official Chinese news agency Xinhua reported that China would impose a three-month ban on the opening of new Internet cafés. This is part of “a major offensive against unchecked use of the Internet. Xinhua said authorities are to conduct a massive probe into existing Internet outlets, which it views as potential hotbeds of dissent and vice.”48

In addition, since January 2000, the Chinese government has been arresting individuals who use the Internet in a manner it deems dangerous, often for online political or religious activity. Qi Yanchen, sentenced to four years in prison on September 19, 2000, was the first Chinese citizen convicted of subversion for material he wrote that was published on the Internet. Qi was officially charged for writing articles published in the May 6, 1999, and May 17, 1999, issues of the U.S.-based Chinese dissident e-mail publication Dacankao (V.I.P. Reference).49 Guo Qinghai, a freelance writer, was arrested in September 2000 for “subverting state power.” Guo had published articles on the Internet discussing Qi Yanchen's case and had posted, on overseas online bulletin boards, essays promoting political reform in China. He was sentenced to four years in prison.50

Zhang Haitao, creator of the only China-based Web site on the outlawed Falun Gong, was charged with subversion on October 11, 2000, accused of establishing a site promoting Falun Gong and of posting an online petition urging followers to protest the government ban on the group.51 Zhang Ji, a college student in Heilongjiang Province, was charged on November 8, 2000, with “disseminating reactionary documents via the Internet.” Authorities said Zhang had e-mailed information to U.S.- and Canada-based Web sites of the Falun Gong religious group as well as downloaded news about the group and shared it with others in China.52

Google recently has had a rocky relationship with the Chinese government, with its entire service being blocked several times for serving information critical of government policy. In January 2006, Google agreed to block a list of search terms that would be provided by the Chinese government and limit searches from within China to this partial set of results at google.cn. The independence of Taiwan and the 1989 Tiananmen Square massacre remain on the list of forbidden topics. Given the history of outright bans, Google seems to have decided that some content was better than none.53

72.1.3.3.2 The Middle East.

According to Human Rights Watch, the United Arab Emirates (UAE) is “the regional leader in advocating censorship of the Web through the use of high-tech means.”54 Dial-up users in the UAE do not access the Internet directly; they dial into a proxy server maintained by Etisalat, the state telecommunications company, in collaboration with a U.S. firm that is contracted to maintain and update the software. Although UAE officials have insisted that the proxy server's sole purpose is to block pornographic sites, at least one nonpornographic site, that of the Gay and Lesbian Arabic Society (www.glas.org), is blocked. When asked about the site, an Information and Culture Ministry official explained that there had been complaints about it.

Saudi Arabia bans publishing or even accessing various types of online expression, including “[a]nything contrary to the state or its system”; “[n]ews damaging to the Saudi Arabian armed forces”; “[a]nything damaging to the dignity of heads of states”; “[a]ny false information ascribed to state officials”; “[s]ubversive ideas”; and “[s]landerous or libelous material.”55

All 30 of the country's ISPs are linked to a ground-floor room at the Riyadh Internet entranceway, where all of the country's Web activity is stored in massive cache files and screened for offensive or sacrilegious material before it is released to individual users. The central servers are configured to block access to “sensitive” sites that might violate “the social, cultural, political, media, economic, and religious values of the Kingdom.”56
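
The architecture just described, a single chokepoint that fetches, caches, and screens pages before releasing them to users, can be sketched in a few lines. This is a toy illustration only, assuming an invented banned-term list and invented pages, not a description of the actual Saudi system:

```python
# Toy sketch of a centralized caching/screening gateway (all data invented).
BANNED_TERMS = {"banned-term-a", "banned-term-b"}
cache = {}  # url -> screened page; "" marks a page that failed screening

def passes_screen(content: str) -> bool:
    """Crude content screen: reject pages containing any banned term."""
    text = content.lower()
    return not any(term in text for term in BANNED_TERMS)

def fetch_via_gateway(url: str, fetch) -> str:
    """Fetch once at the central gateway, screen, cache, then release."""
    if url not in cache:
        page = fetch(url)
        cache[url] = page if passes_screen(page) else ""
    return cache[url] or "Blocked: page withheld by gateway policy"

# Usage with a fake fetch function standing in for the real network:
pages = {"http://site.example/ok": "ordinary news story",
         "http://site.example/bad": "story containing banned-term-a"}
print(fetch_via_gateway("http://site.example/ok", pages.get))   # released
print(fetch_via_gateway("http://site.example/bad", pages.get))  # blocked
```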

According to Human Rights Watch, although official Saudi explanations of Internet censorship focus on materials deemed offensive to conservative Muslim sensibilities, Saudi blocking apparently extends to political sites. In early 1999, the site of at least one exiled dissident group, the Committee against Corruption in Saudi Arabia (www.saudhouse.com), was reportedly blocked.

Yemen's telecommunications service provider, Teleyemen, told Human Rights Watch that there is a “general requirement … to limit information which is considered to be undesirable in terms of causing offenses against social, religious, or cultural standards” and that Teleyemen uses the Surfwatch censorware program in conjunction with a proxy server.

More recently, controversies have erupted when Middle Eastern Islamic groups or nations have protested content offensive to Islam that was hosted outside their own borders. In 2005, a Danish newspaper published caricatures of the prophet Muhammad, which many Muslims found to be offensive. The cartoonist, the newspaper, and the Danish people as a whole were targeted with boycotts and death threats. Imams in Denmark brought suit to have the government punish the newspaper but had no success in the courts. In 2007, Turkey briefly blocked all access to YouTube in protest of a video insulting the nation's founder, Mustafa Kemal Atatürk. Google, which had purchased YouTube earlier that year, quickly removed the video in question, and Turkey revoked the ban.57

72.1.3.3.3 Myanmar/Burma.

In 1988, a new military clique called the State Law and Order Restoration Council (SLORC) took power in the country formerly known as Burma, now known as Myanmar. They managed to achieve cease-fires with 15 armed organizations, but they refused to step down, asserting that only “the military can ensure national unity and solidarity,”58 even after the National League for Democracy's overwhelming victory in the 1990 election. In 1997, the SLORC was renamed the State Peace and Development Council (SPDC), but the membership remained largely unchanged.

SLORC/SPDC maintains power partly by controlling access to information. In January 2000, the Myanmar Post and Telecommunication (MPT), then the only ISP in Myanmar, issued regulations forbidding Internet users from posting political speech on the Web, such as writings “detrimental to the interests of the Union of Myanmar” or “directly or indirectly detrimental to the current policies and secret security affairs of the government of the Union of Burma.”59 Only the person granted an Internet account may use it, and the account holder is held responsible for all use of that account. Web pages may not be created without government permission, and violation of any regulation is subject to legal action.60 In 2005, Myanmar's only private ISP, Bagan Cybertech, was nationalized under the control of the MPT, and the government entered into an agreement with the U.S. Web content filtering company Fortinet to better control Web access from within Myanmar.61

The Burmese military has been effective at “warding off any Internet revolution by limiting the rights of its citizens.”62

72.2 U.S. CONTEXT: FIRST AMENDMENT RIGHTS.

The First Amendment of the U.S. Constitution strongly protects freedom of speech in the United States. In several ways, the right to speak is the most powerful of all U.S. constitutional rights. First Amendment scrutiny can be much stricter than in other areas of constitutional law. It is usually easier to get in front of a court when “speech” is at issue, and to win, because many technical legal requirements are relaxed.

Thus, although the right to speak is not constitutionally absolute and may be restricted by government, several basic principles limit governmental authority to restrict freedom of speech more stringently than they limit its authority to regulate other activities. These principles have been articulated through a variety of legal doctrines.

72.2.1 What Does the First Amendment Protect?

Today, the First Amendment generally protects all forms of communicative or expressive activity63 from “state action” or governmental interference.64 Courts generally presume that linguistic acts or visual depictions are covered by the First Amendment but require litigants to show that nonlinguistic acts are speech unless the law has already so held.65

Nevertheless, the coverage of the First Amendment is quite broad. It includes not only the right to speak but also the right not to speak, or against “compelled” speech,66 the right to speak or associate anonymously,67 and the right to read or to receive information.68 Often activity that is not necessarily communicative is protected because it enables or facilitates expression. Highly discretionary city licensing of newsstands was struck down as an invalid licensing scheme that affected liberty of circulation.69 Money is not speech, but election campaign contributions and expenditures are protected as speech.70

In addition, the First Amendment requires that the government observe some degree of neutrality in its treatment of speakers71 and speech media;72 it cannot arbitrarily prefer one speaker or one medium to another.

72.2.2 Basic First Amendment Principles.

There is no consensus or accepted theory underlying U.S. protection of free speech. The different rationales for freedom of speech can be categorized as more or less “positive” or “negative.” Positive justifications identify speech as related to some special moral, social, or political value. For example, the “marketplace of ideas” theory is based on the notion that “the best test of truth is the power of the thought to get itself accepted in the competition of the market.”73 Courts also have defended the right to free expression as embodying the value of “speaker autonomy,”74 thus “putting the decision as to what views shall be voiced largely into the hands of each of us, in the hope that use of such freedom will ultimately produce a more capable citizenry and more perfect polity, and in the belief that no other approach would comport with the premise of individual dignity and choice upon which our political system rests.”75

Negative theories presume the value of speech and focus instead on the dangers of government regulation of speech. The main idea here is that speech is as vulnerable to threats of punishment as to punishment itself. “It is characteristic of freedoms of expression in general that they are vulnerable to gravely damaging yet barely visible encroachments.”76 This fear of “chilling effects” on speech sometimes is associated with the possibility that juries may not treat unpopular speakers or unorthodox ideas fairly, but it also reflects a concern that individuals may censor themselves. Such self-censorship can harm public debate while being almost “invisible” to the courts because it occurs through many small individual choices.77

72.2.2.1 Harm/Causation.

Perhaps the most important First Amendment principle is that the government has the burden to show that the speech it wishes to regulate truly causes significant harm. This principle was not accepted for many years; in the early days of First Amendment law, speech could be punished as an “attempt” if the natural and reasonable tendency of what was said would be to bring about a forbidden effect. Thus, for instance, individuals who had mailed circulars to draftees arguing that conscription was unconstitutional and urging them to assert their rights were convicted for conspiring to attempt to obstruct the draft.78

Today, courts are far more skeptical about government claims of harm. If speech could be restricted merely because the mathematical expectation of harm was significant, speech would become easier to restrict the more widely it was disseminated. Such a result is inconsistent with the First Amendment's protection of open, public discourse. As a classic opinion stated, “no danger flowing from speech can be deemed clear and present, unless the incidence of the evil apprehended is so imminent that it may befall before there is opportunity for full discussion. If there be time to expose the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”79

The concern here is not only for the speech itself; it includes a concern that the speaker can properly be held responsible for harm. The modern approach is exemplified by a case where a speaker was convicted for advocating violence. The Supreme Court reversed the conviction because the law did not require the government to show both that the speaker intended to produce “imminent lawless action” and that the speech was “likely to incite or produce such action.”80 The requirement of intent helps protect speakers from being held responsible for the acts of others. As one court put it, “Much speech is dangerous. Chemists whose work might help someone build a bomb, political theorists whose papers might start political movements that lead to riots, speakers whose ideas attract violent protesters, all these and more leave loss in their wake.”81 Without careful attention to harm and causation, the right to speak would mean little.82

72.2.2.2 Neutrality.

Not all reasons that government might use to restrict speech are valid. Thus, a second basic First Amendment principle is governmental neutrality as to speech content. Government is likely to have illegitimate reasons for wishing to restrict speech, such as the desire to suppress political criticism, but it may be able to disguise such hostility by claiming that less selfish interests, such as “public order” or the civility of discourse, are truly at stake.83 Also, because public officials may have a strong self-interest in stifling dissent, they are likely to ignore or undervalue the social interest in free speech. Such motivations may lead to regulation that skews or distorts public debate. In general, then, government may not regulate speech based on its substantive content or the message it conveys.84

72.2.2.3 Precision.

Third, all speech regulation must be precise. “Because First Amendment freedoms need breathing space to survive, government may regulate in the area only with narrow specificity.”85 Much of the First Amendment's force stems from this principle. Imprecise regulation not only chills speech but also gives officials too much discretion to discriminate on the basis of content. Without clearly stated standards, courts cannot easily ferret out content-based discrimination.86 Insisting on precision also allows courts to avoid assessing the importance of the legislature's goals; they need only say that the means are too blunt.

72.2.2.4 Background Doubt-Resolution Principles.

Adding to the strength of First Amendment protection for speech are several principles for resolving uncertainty. Speech regulation generally requires knowledge or intent on the speaker's part, which protects bookstores and others that “carry” others' speech.87 Moreover, the factual burden of proof is usually on the person or entity seeking to restrict speech.88 Finally, speech that in itself might be disfavored receives “strategic” protection in order to prevent undue chill to public discourse; for instance, while false statements of fact may be considered of little First Amendment value, permitting the punishment of mere falsehood would overly burden speakers.89 Put simply, doubts generally are resolved in favor of speech.

72.2.3 Limitations on Government Interference with Speech

72.2.3.1 Distinction between Substantive Tests and Procedural Safeguards: Example of Prior Restraints.

A critical distinction in First Amendment (and other) law is between substantive and procedural review. The difference is best explained by reference to “prior restraints,” which are presumed unconstitutional.

Most people are familiar with prior restraints as judicial orders preventing speech or publication. In such cases, government “carries a heavy burden of showing justification for the imposition of such a restraint.”90 For instance, the Supreme Court required the government to demonstrate that publication of the Pentagon Papers “will surely result in direct, immediate, and irreparable damage to our Nation or its people.”91 Judicial “gag orders” on press publication are subject to similarly searching scrutiny.92 In these cases, courts are concerned with whether the particular restraint on speech is sufficiently justified.

A second line of prior restraint doctrine illustrates procedural scrutiny. American hostility to prior restraints is a reaction to English press licensing systems under which nothing could be printed without the prior approval of government or church authority. The fear is that discretionary permit schemes make freedom of speech “contingent upon the uncontrolled will of an official.”93 Thus, the law authorizing such a licensing scheme must provide “narrowly drawn, reasonable and definite standards for the [administering] officials to follow.”94 Thus, licensing schemes, such as for screening obscene movies, are constitutional only if accompanied by procedural safeguards necessary “to obviate the dangers of a censorship system.”95 There must be a “prompt final judicial decision” reviewing any “interim and possibly erroneous denial of a license.” Moreover, “because only a judicial determination in an adversary proceeding ensures the necessary sensitivity to freedom of expression,” the censor must, “within a specified brief period, either issue a license or go to court.”96 In such cases, the particular restraint is relatively unimportant: The issue is whether the statute itself contains sufficient procedural safeguards.

72.2.3.2 Levels of Substantive Scrutiny and the Issue of Content Neutrality.

First Amendment cases typically feature three levels of substantive scrutiny, turning mainly on how the statute regulates the meaning of speech: by subject, by viewpoint, or without reference to meaning at all. In general, strict scrutiny applies to laws that attempt to stifle “speech on account of its message.”97 In such situations, the government must demonstrate both that there is a compelling interest in restricting the speech and that it has chosen the least restrictive means of furthering that compelling interest.

When the government targets not subject matter but particular views taken by speakers on a subject, the violation of the First Amendment is all the more blatant.98 Viewpoint discrimination is thus an egregious form of content discrimination. The government must abstain from regulating speech when the specific motivating ideology or the opinion or perspective of the speaker is the rationale for the restriction.99 Such scrutiny is so stringent that it applies to otherwise “unprotected” speech.100

However, the government often regulates speech without reference to its meaning or communicative impact. Such content-neutral speech regulations are subject to intermediate rather than strict scrutiny “because in most cases they pose a less substantial risk of excising certain ideas or viewpoints from the public dialogue.”101 Such laws “do not pose such inherent dangers to free expression, or present such potential for censorship or manipulation, as to justify application of the most exacting level of First Amendment scrutiny.”102

Intermediate scrutiny comes in two main flavors. Content-neutral regulation of the time, place, or manner of speech must be narrowly tailored to serve a significant government interest and leave open ample alternative channels for communication.103 Content-neutral regulation in other contexts must further a substantial government interest and not burden substantially more speech than necessary to further that interest.104

A third type of intermediate scrutiny is the “secondary effects” doctrine, which operates as an exception to the general rule that content-based speech regulation is subject to strict scrutiny. The doctrine asks whether the harm is attributable to the communicative aspects of the speech; if not, the regulation is said to be aimed not at speech but rather at its “secondary effects,” and the regulation is deemed content-neutral.105 Thus, city zoning ordinances have been upheld even though they applied only to theaters that showed adult films because their purposes of preventing crime and protecting property values and the quality of urban life were deemed unrelated to speech content. This doctrine has been criticized for permitting speech suppression “whenever censors can concoct ‘secondary’ rationalizations for regulating the content of political speech.”106 The main limit on the doctrine is the rule that “listeners' reaction to speech is not a content-neutral basis for regulation.”107

72.2.3.3 Types of Procedural Scrutiny: Vagueness and Overbreadth.

Laws generally may not “set a net large enough to catch all possible offenders, and leave it to the courts to step inside and say who could be rightfully detained, and who should be set at large.”108 This hostility to vague laws is greater when speech is concerned; courts will assume that ambiguous legal language will be used against speech.109 The void-for-vagueness doctrine emphasizes both fair notice to citizens of what conduct is prohibited and clear standards for law enforcement to follow.110

A closely related doctrine is that of overbreadth. A law is overbroad when it “does not aim specifically at evils within the allowable area of [government] control, but … sweeps within its ambit other activities that constitute an exercise” of First Amendment rights.111 The danger is “not merely the sporadic abuse of power by the censor” but the “continuous and pervasive restraint on all freedom of discussion that might be reasonably regarded as within its purview.”112 Accordingly, the overbreadth doctrine permits facial113 invalidation of laws that inhibit speech if impermissible applications of the law are substantial, when “judged in relation to the statute's plainly legitimate sweep.”114 Both doctrines exemplify the basic precision principle.

72.2.4 Exceptions Where Speech Can Legally Be Limited.

The government can restrict speech both directly and indirectly. Roughly speaking, the government is more limited in direct regulation of speech than in indirect regulation. First Amendment law also contains many subdomains with their own special rules, such as over-the-air broadcasting,115 government enterprises,116 and government subsidies.117

72.2.4.1 Neutral Laws of General Applicability.

Many laws restrict speech to some extent. Absent special circumstances, “generally applicable” laws can be applied to speech without triggering First Amendment scrutiny. Classic examples are laws against speeding applied to a news reporter hurrying to file a story, or taxes that apply to books and newspapers as well as other goods.

The “generally applicable” doctrine contains its own limits. Courts will look to see whether the law appears to be a pretext for targeting speech and whether it was based on perceived secondary effects.118 If so, then there will be some level of First Amendment scrutiny.

72.2.4.2 Constitutionalization of Crimes and Torts: Advocacy of Illegality, Threats, and Defamation.

Another major category of First Amendment doctrines relates to crimes and torts directed at speech. In general, the First Amendment does not prevent the application of criminal law to speech simply because it is speech. Although agreeing to participate or assist in doing an unlawful act is usually accomplished through “speech,” the larger course of conduct in which that speech is integrated overrides any potential First Amendment applicability.119 Even so, there can be concern for speech-suppressive effects.

Criminal laws that are likely to involve speech “must be interpreted with the commands of the First Amendment clearly in mind.”120 As discussed in Section 72.2.2.1, laws that criminalize incitement to unlawful action must require proof of both intent and harm. Similarly, threatening may be criminally punished if the law is carefully defined and applied to “true threats” but not “political hyperbole.”121 Recently, the Supreme Court held that federal law prohibiting the disclosure of unlawfully intercepted communications could not be applied to those who had innocently acquired the information of public concern.122

Tort laws that target speech also must be construed and applied in light of First Amendment concerns. The prime example is libel or defamation law. For many years, the First Amendment simply did not apply to libel suits. In 1964, however, the Supreme Court held that libel suits involve a form of government action, because courts must enforce libel judgments, and thus imposed First Amendment restrictions on such lawsuits.123

72.2.4.3 “Fighting Words,” Obscenity, and Child Pornography (So-Called Unprotected Speech).

Certain categories of speech, such as obscenity, child pornography, “fighting words,” and so on, are considered “unprotected” even though they are linguistic or pictorial. Here the issue is not that the speech is not communicative but that its value is low; “fighting words,” for instance, are deemed “utterances [that] are no essential part of any exposition of ideas, and are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality.”124

Even so, the definition of “fighting words” is closely limited. First, the words must “by their very utterance inflict injury or tend to incite an immediate breach of the peace.”125 Second, they must be “directed to the person of the hearer.”126

The Supreme Court defines obscenity as works that, (1) taken as a whole, appeal to the prurient interest of the average person in sex, (2) portray sexual conduct in a patently offensive way, and (3) taken as a whole, lack serious literary, artistic, political, or scientific value.127 This definition illustrates how the Supreme Court has sought to ensure that even laws that target unprotected speech do not reach too far. In particular, the first prong of the obscenity test is evaluated with reference to “contemporary community standards.” As the Supreme Court observed, “It is neither realistic nor constitutionally sound to read the First Amendment as requiring that the people of Maine or Mississippi accept public depiction of conduct found tolerable in Las Vegas, or New York City.”128 Moreover, any statute regulating obscenity must define the “sexual conduct” at issue.

Child pornography is defined generally as works that visually depict sexually explicit conduct by children below a specified age.129 As with obscenity, the statute must clearly define the conduct that may not be depicted130 and must require some element of knowledge or intent. Unlike obscenity, however, legally regulable child pornography need not appeal to the prurient interest of the average person, need not portray sexual conduct in a patently offensive manner, and need not be considered as a whole.131 Child pornography is the least protected type of speech. Note, however, that “virtual” child pornography, produced without the use of real children, may not be banned as child pornography. The U.S. Supreme Court clarified the distinction, noting that in actual child pornography, the perpetrator commits child abuse in the creation of the pornography; no such abuse is present in the virtual version.132

But even “unprotected” speech is not entirely invisible to the First Amendment. Regulations that target such speech must be based on the reason why the speech is categorized as unprotected; thus, even though “fighting words” may be regulated because of their potential to generate violence, regulating only “fighting words” that relate to race, color, creed, religion, or gender would “impose special prohibitions on those speakers who express views on disfavored subjects,” amounting to unconstitutional viewpoint discrimination.133

72.2.4.4 Less-Protected Speech, Media, and Recipients.

Other categories of speech are protected by the First Amendment but may be regulated more easily. The main examples are commercial speech, broadcast speech, and speech that is “indecent” or “harmful to minors.”134

Commercial speech is defined as speech that proposes a commercial transaction; the main example is advertising. Originally, such speech was considered unprotected, but today its regulation is subject to a form of intermediate scrutiny.135

Broadcast speech is also subject to a form of intermediate scrutiny.136 The Supreme Court has long adhered to the view that different media of expression may require different First Amendment analyses. The standard for regulating the speech of cablecasters is not entirely clear.137

With regard to sexual content, there are three forms of less-protected speech. First, although nonobscene sexual expression is in a sense “fully” protected, cities may rely on “secondary effects” analysis to regulate the location of bookstores and theaters that sell “adult” material or offer “adult” entertainment like nude dancing.138

Second, the Supreme Court has permitted substantial regulation of “indecent” speech over the airwaves139 and on cable TV.140 Indecency regulation is closely tied to the interest of preventing minors' access to sexual material. As that interest varies, so does the latitude for regulation. For example, the regulation of indecent speech delivered telephonically was not permitted.141 Thus, indecency is an odd speech category that is regulated primarily with respect to the audience: The government has a strong interest in regulating minors' access to indecency, but it has no such interest in regulating adults' access.

Closely related to the category of “indecency” is material judged “harmful to minors” (HTM). Roughly speaking, this category is like obscenity, but geared to those under the age of 17.142 This category has become increasingly important in the censorware context given the “full” protection accorded to speech over the Internet. Courts have confined HTM-based access restrictions within narrow constitutional boundaries. This point was made clear in Reno v. ACLU, where the Supreme Court held unconstitutional a general prohibition of indecent speech on the Internet. The Court found:

  • Parents may disseminate HTM speech to their children.143
  • The HTM concept applies only to commercial transactions.144
  • The government may not simply ban minors' exposure to a full category of speech, such as nudity, when only a subset of that category can plausibly be deemed HTM.145
  • The government interest is not equally strong throughout the HTM age range.146

72.2.5 Legislation and Legislative Initiatives in the United States.

Since the advent of the Internet, there have been many attempts to regulate access to Internet speech. Such legislation can be divided into two basic categories. In the first category, the law imposes liability for communicating certain kinds of material over the Internet. Such laws did not require the use of software to block Internet sites or to filter their content because they focused on the conduct of the speaker or publisher. In the second category, the law requires or encourages the use of filtering software by those who make Internet content available.

In 1996, Congress enacted the Communications Decency Act (CDA), which sought to regulate the communication of “indecent” material over the Internet. The CDA prohibited Internet users from using the Internet to communicate material that, under contemporary community standards, would be deemed patently offensive to minors under the age of 18. In holding that the CDA violated the First Amendment, the Supreme Court explained that, because it failed to define key terms, the statute was unconstitutionally vague. Moreover, the Court noted that the breadth of the CDA was “wholly unprecedented” in that, for example, it was “not limited to commercial speech or commercial entities” but rather embraced “all nonprofit entities and individuals posting indecent messages or displaying them on their own computers.”

In response to the Supreme Court's invalidation of the CDA, Congress in 1998 enacted the Child Online Protection Act147 (COPA). The COPA scaled down the restrictions of the CDA by using the “harmful to minors” category of speech. The COPA was found unconstitutional by a district court because it failed the strict scrutiny test for content-based regulation of speech.148 That holding was affirmed on appeal, although on a different rationale: because “harmful to minors,” like obscenity, must be defined by reference to discrete, diverse, geographically defined communities, COPA's attempt to promulgate a national “harmful to minors” standard was necessarily unconstitutional.149 States also have attempted to control access to Internet speech by using the “harmful to minors” concept, but these attempts have proven similarly unsuccessful.150

The second category is exemplified by the Children's Internet Protection Act (CIPA),151 which, roughly speaking, requires all schools and libraries that receive certain types of federal funding to adopt and implement an Internet safety policy and to use “technology protection measures” on all computers that offer Internet access. Such measures must block or filter Internet access to visual depictions that are obscene, child pornography, or harmful to minors. Libraries must also regulate access by minors to “inappropriate material.” Thus, the CIPA effectively requires the use of filtering software, sometimes referred to as “censorware.”
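
In outline, CIPA's structure is a category policy that varies with the patron: obscenity and child pornography must be blocked for everyone, while “harmful to minors” material must be blocked only for minors. The sketch below captures that decision logic only; the category labels and the classification of any given page are placeholders, not any vendor's actual product or API:

```python
# Hedged sketch of CIPA-style policy logic (category names are placeholders).
BLOCK_FOR_ALL = {"obscenity", "child_pornography"}
BLOCK_FOR_MINORS = BLOCK_FOR_ALL | {"harmful_to_minors"}

def must_block(page_categories: set, patron_is_minor: bool) -> bool:
    """Apply the policy to a page's (assumed) category labels."""
    policy = BLOCK_FOR_MINORS if patron_is_minor else BLOCK_FOR_ALL
    return bool(page_categories & policy)

# "Harmful to minors" material is blocked for a minor but not for an adult;
# CIPA also permits libraries to disable filtering for adult use.
print(must_block({"harmful_to_minors"}, patron_is_minor=True))   # True
print(must_block({"harmful_to_minors"}, patron_is_minor=False))  # False
```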

72.2.6 Attempts to Control Access: Case Law.

The American Library Association has taken a firm stance against censorware in libraries152 and has committed itself to challenging federal censorware requirements. To date, though CIPA applies to both libraries and schools, the legal battle over the use of censorware to control access to speech on the Internet has primarily involved libraries. The plaintiffs in these cases have included library and librarian associations such as the American Library Association, library patrons and users, and entities and individuals whose Web sites are likely to be blocked by censorware.

72.2.6.1 Public Libraries.

On one hand, public libraries must be concerned that, by implementing censorware, they violate patrons' First Amendment right to receive information. On the other hand, public libraries may face claims from patrons and employees alike, including hostile work environment claims, for failing to implement censorware on library Internet terminals.

As to the first concern, libraries face a conflict between not censoring adults' access to constitutionally protected material on the Internet and protecting children from inappropriate content. In a 1998 case involving a Virginia library, a federal district court found the library's blocking policy unconstitutional.153 The challenged policy required that “site-blocking software … be installed on all [library computers]” so as to block child pornography, obscene material, and harmful-to-minors material.154 The defendants implemented the policy by installing the commercial filtering software X-Stop on all public library terminals in the county.

The court first concluded that the public libraries were limited public forums and then subjected the defendants' indisputably content-based filtering policy to strict constitutional scrutiny.155 Assuming that the library's interests were compelling, the court found that its policy was not the least restrictive means of furthering those interests. First, the court held that the library had not even considered less restrictive means, including privacy screens, library staff monitoring of Internet use (which had been used to enforce other library policies), and filtering software installed only on Internet terminals used by minors or only when minors are using them.

More important, the court found that the censorware-based policy violated the First Amendment because censorware is overinclusive in two respects. First, the X-Stop censorware blocked more speech than that targeted by the policy.156 Second, even if censorware could somehow be tailored to exclude only restricted materials, it unconstitutionally “limit[s] the access of all patrons, adult and juvenile, to material deemed fit for juveniles.”157

Finally, the court found that the library policy violated the doctrine barring prior restraints. Mandatory filtering policies that rely on commercial blocking software arguably constitute prior restraints because, as in Mainstream Loudoun, they “entrust all … blocking decisions … to a private vendor” whose standards and practices cannot be monitored by the filtering library.158

In a more recent and notable departure from this line of argument, the U.S. Supreme Court found in a 2003 case from Pennsylvania that CIPA was constitutional because the library's interest in preventing minors from accessing obscene material “outweighs the free speech rights of library patrons and website publishers.”159

As to the second concern, libraries are unlikely to face liability from patrons for failing to use censorware. For example, in Kathleen R. v. City of Livermore,160 a woman sued a public library, complaining that her 12-year-old son was able to view and download pornography at a public library in Livermore, California. The plaintiff sought an injunction that would prohibit the library “from maintaining any computer system on which it allows people to access … obscene material or on which it allows minors to access … sexual material harmful to minors.” The plaintiff also claimed that the library had a constitutional duty to “protect” library patrons from unwanted and “harmful” sexually explicit materials. The California court of appeals rejected all of the plaintiff's claims.

A different concern here is library liability to employees for failing to use censorware under sexual harassment “hostile work environment” theories. Federal civil rights laws and many states' parallel civil rights laws afford employees the “right to work in an environment free from discriminatory intimidation, ridicule and insult.”161 Arguably, access to materials over the Internet that are offensive due to their sexually explicit nature or messages regarding race, religion, or ethnicity may subject a library to liability for a hostile environment under these laws. In 2001, employees of a Minnesota library brought a successful complaint before the Equal Employment Opportunity Commission claiming that the library failed to prevent a hostile work environment by failing to prevent the viewing of objectionable material on library computers.162

72.2.6.2 Public Schools.

To date, there are no reported cases involving the use of censorware in public schools. Schools traditionally have been considered places where the government has a strong interest in regulating speech,163 particularly with respect to curricular materials.164 Prevailing cases emphasize the importance of inculcating values and maintaining civility in schools, along with the risk that the viewpoints expressed by student speech would be attributed to the school. Moreover, most public school students are too young to enjoy the same constitutional rights as adults.

Nevertheless, the First Amendment should place significant limits on public schools' use of censorware. In general, students do not “shed their constitutional rights to freedom of speech or expression at the schoolhouse gate.”165 Thus, while schools have maximum authority over curriculum and school-sponsored activities and substantial authority to regulate student speech, they have less authority to regulate students' right to receive information. Despite schools' “comprehensive authority” to prescribe and control student conduct, “students may not be regarded as closed-circuit recipients of only that which the State chooses to communicate.”166

Moreover, the extent of governmental power to regulate conduct of minors not constitutionally regulable when committed by adults is a “vexing” question, “perhaps not susceptible of precise answer.”167 Although the government has a general interest in regulating schools, the interest in regulating students' access to “objectionable” extracurricular information is of uncertain pedigree. For instance, the Supreme Court has characterized the government interest in protecting children against “harmful to minors” material as derivative of or secondary to parents' interests.168 Thus, “[s]peech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.”169 This characterization suggests that the state has no interest in blocking a minor's access to HTM material to which parents do not object.170

Nor is the state's interest uniformly strong across all minors. In general, minors' fundamental rights, including the right to receive information, strengthen as they grow older; “constitutional rights do not mature and come into being magically only when one attains the state-defined age of majority.”171 In addition, some high school students are 18 or 19 and thus not minors for HTM purposes.

The Electronic Frontier Foundation (EFF) published a report in 2003 examining Internet content filtering in public schools. Finding that current filtering technology significantly underblocks material that would be considered harmful to minors while significantly overblocking material protected by the First Amendment, the EFF recommended that schools use other, less restrictive methods.172
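
The mechanics of overblocking and underblocking are easy to illustrate. The sketch below is hypothetical and is not drawn from any actual filtering product: a naive keyword filter with an invented blocklist blocks a breast cancer health page (overblocking protected speech) while passing objectionable text that simply avoids the listed words (underblocking).

# A minimal sketch of a keyword-based content filter. The blocklist and
# sample pages are invented for illustration; real censorware is more
# elaborate but exhibits the same two failure modes.

BLOCKED_KEYWORDS = {"breast", "sex", "nude"}

def is_blocked(page_text: str) -> bool:
    """Block a page if any blocklisted keyword appears as a word in it."""
    words = page_text.lower().split()
    return any(keyword in words for keyword in BLOCKED_KEYWORDS)

# Overblocking: a constitutionally protected health page is blocked.
print(is_blocked("Early screening improves breast cancer survival rates."))  # True

# Underblocking: objectionable text that avoids the listed words passes.
print(is_blocked("Explicit adult material described in unlisted terms."))    # False

No fixed word list can track the legal categories of “obscene” or “harmful to minors,” which is why both kinds of error persist regardless of how the list is tuned.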

72.3 PARENTAL INVOLVEMENT/RESPONSIBILITY.

Although the technical issues of censorware are identical regardless of whether it is applied by governments, employers, or parents, the parent-child case generates much of the political discussion. Two conflicting mental models are at work in the debate.

The primary impetus for censorware arises out of an idea that might be called the “toxic material” theory, under which viewing certain information has a toxic effect, akin to that of a harmful drug or poison. As was stated in a lawsuit seeking an injunction against a library with unrestricted Internet access: “Children such as Brandon P. who view obscenity and pornography on the library's computers can and have sustained emotional and psychological damage in addition to damage to their nervous systems. It is highly likely that such damage will occur given the library's policy.”173 The toxic material threat model has produced legislation such as CIPA.

In contrast, theoretical discussion typically takes place in a model that can be termed the “control-rights” theory. This is concerned primarily with authority relationships within some framework. One paper submitted to a government commission stated:

[T]he decision by a third party that a person may not use a computer to access certain content from the Internet demands some sort of justification. The burden should be on the filterer to justify the denial of another person's access. The most plausible justifications for restricting access are that the third party owns the computer or that the third party has a relation of legitimate authority over the user.174

However, the control-rights discussion does not address the assumptions and threat model inherent in the toxic material theory. A genuine belief that information is harmful logically requires restrictions that are as extensive and widespread as possible. This mirrors standard security practice when a threat is modeled as dangerous contamination: every environment where the subject might be located must be secured to the greatest extent possible. The same logic drives the technical requirement that censorware also ban privacy, anonymity, and even language-translation Web sites, since each such service can retrieve forbidden content on the user's behalf. From the filter's perspective, they are all security holes.
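
To make the “security hole” point concrete, consider a filter that blocks requests by destination hostname. A translation or anonymizer service fetches the banned page server-side, so the filter sees only the intermediary's hostname. The sketch below uses invented hostnames and a deliberately simplified blocklist; it is only an illustration of why filter vendors end up banning such intermediary sites as well.

# A hostname blocklist and the proxy-style hole that defeats it.
# All hostnames here are invented for illustration.
from urllib.parse import urlparse, quote

BLOCKED_HOSTS = {"banned.example.com"}

def is_blocked(url: str) -> bool:
    """Block a request only when the destination hostname is blocklisted."""
    return urlparse(url).hostname in BLOCKED_HOSTS

direct = "http://banned.example.com/page"
print(is_blocked(direct))  # True -- the direct request is stopped

# A translation site fetches the banned page on the user's behalf; the
# filter sees only the translator's hostname and lets the request through.
via_translator = "http://translate.example.net/translate?u=" + quote(direct)
print(is_blocked(via_translator))  # False -- same content, unblocked hostname

Closing the hole requires adding the translator itself to the blocklist, and then every other service with the same retrieval capability, which is exactly the expansive dynamic described above.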

72.4 SUMMARY.

Although the Internet has the technological potential to create a global forum for free expression, one must not underestimate the political and technological barriers at work.

Internet-specific laws. Some governments have criminalized certain types of Internet speech. Such criminal penalties may come in the form of laws intended to protect minors from “harmful” material. Under U.S. law, however, requiring Internet speakers to shield certain populations from their speech has been found to effect a ban on that speech.175

Application of existing laws. Governments need not specifically enact laws targeting Internet-based speech. For example, the German government action against CompuServe for providing access to illegal material merely applied existing laws to the Internet.

Content-based license terms applied to Internet users and service providers. Some countries have established licensing systems that require Internet users and/or service providers, as a condition of having, using, or providing Internet access, to agree to refrain from certain kinds of speech or to block access to speech. For instance, China has issued rules requiring anyone with Internet access to refrain from proscribed speech.

Compelled use of computerized censorship, rating, or content-labeling tools. Various techniques can prevent individuals from using the Internet to exchange information on topics that may be controversial or unpopular, may enable certain governments to enforce their own rating systems, and may (through overblocking by overly broad filtering techniques) inappropriately block access to significant amounts of nonobjectionable content.

72.5 FURTHER READING

Electronic Privacy Information Center, Filters and Freedom 2.0 (2001): http://epic.org/bookstore/filters2.0/default.html.

Semitsu, Junichi P. “Note, Burning Cyberbooks in Public Libraries: Internet Filtering Software vs. the First Amendment,” 52 Stan. L. Rev. 509 (2000).

Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999.

OpenNet Initiative. Documenting Internet Content Filtering Worldwide. http://opennet.net/.

Sullivan, Kathleen M. “First Amendment Intermediaries in the Age of Cyberspace,” 45 UCLA L. Rev. 1653 (1998).

Weinberg, Jonathan. “Rating the Net,” 19 Hastings Comm. & Ent. L.J. 453 (1997).

72.6 NOTES

1. Universal Declaration of Human Rights, art. 19, G.A. Res 217A, U.N. GAOR, 3d Sess., U.N. Doc. A/810 (1948).

2. Thomas D. Jones, “Human Rights: Freedom of Expression and Group Defamation Under British, Canadian, Indian, Nigerian and United States Law—A Comparative Analysis,” 18 Suffolk Transnat'l L. Rev. 427, 428 (1995) (footnotes omitted).

3. Samuel Fifer and Michael Sachs, “The Price of International Free Speech: Nations Deal With Defamation on the Internet,” 8 DePaul-LCA J. Art & Ent. L. 1, 2 (1997).

4. Samuel Fifer and Michael Sachs, “The Price of International Free Speech: Nations Deal With Defamation on the Internet,” 8 DePaul-LCA J. Art & Ent. L. 1 (1997).

5. Order of the County Court of Paris, at 2, in La Ligue Contre Le Racisme Et L'Antisemitisme v. Yahoo! Inc., No. RG: 00/05308 (Nov. 20, 2000) (translated).

6. Order of the County Court of Paris, at 4.

7. Order of the County Court of Paris, at 4.

8. www.cdt.org/jurisdiction/010607yahoo.pdf.

9. Yahoo! Inc. v. La Ligue Contre le Racisme et L'Antisemitisme, et al., 145 F. Supp. 2d 1168 (N.D. Cal. 2001).

10. See note 3 generally.

11. Jane Ginsburg, “Copyright Without Borders? Choice of Forum and Choice of Law for Copyright Infringement in Cyberspace,” 15 Cardozo Arts & Ent. L.J. 153 (1997).

12. Peter Swire, “Of Elephants, Mice, and Privacy: International Choice of Law and the Internet,” 32 Int'l Law. 991 (1998).

13. See text at Section 72.2.5.

14. See text at Section 72.1.3.3.2.

15. John T. Delacourt, “Recent Development: The International Impact of Internet Regulation,” 38 Harv. Int'l L. J. 207, 212 (1997).

16. Kim Rappaport, Note, “In the Wake of Reno v. ACLU: The Continued Struggle in Western Constitutional Democracies with Internet Censorship and Freedom of Speech Online,” 13 Am. U. Int'l L. Rev. 765, 791 (1998).

17. Kim Rappaport, Note, at 795–799.

18. Some observers reported that the Internet seems to foster “hate” sites. Laura Leets, “Responses to Internet Hate Sites: Is Speech Too Free in Cyberspace?” 6 Comm. L. & Pol'y 287, 288 (2001) (noting that watchdog groups have documented about 2,800 hate sites).

19. Id. at 291–294.

20. See note 16, at 785–786 (footnotes omitted).

21. Id. at 789 (footnotes omitted).

22. Steve Kettmann, “German Hate Law: No Denying It,” December 15, 2000: www.wired.com/news/politics/0,1283,40669,00.html.

23. www.wired.com/news/politics/0,1283,40669,00.html.

24. www.wired.com/news/politics/0,1283,40669,00.html.

25. J. Zittrain and B. Edelman, “Localized Google Search Result Exclusions” (2002); http://cyber.law.harvard.edu/filtering/google/.

26. Canadian Criminal Code § 319(2), R.S.C. 1985, c. C-46 (willful promotion of hatred against an identifiable group); see id. at § 181 (willful publication of false statements likely to injure a public interest).

27. R. v. Keegstra, [1990] 3 S.C.R. 697.

28. See note 27.

29. Robert Cribb, “Canadian Net Hate Debate Flares,” March 25, 1998: www.wired.com/news/news/politics/story/11195.html (“Cribb”).

30. Zundel was charged with spreading “false news” in violation of § 181 of the Canadian Criminal Code.

31. R. v. Zündel, [1992] 2 S.C.R. 731 (Can.).

32. Credence Fogo-Schensul, “More Than a River in Egypt: Holocaust Denial, the Internet, and International Freedom of Expression Norms,” 33 Gonz. L. Rev. 271, 245 n.32 (1997–1998) (citation omitted).

33. “Ernst Zundel Sentenced to 5 Years in Prison for Holocaust Denial,” Winnipeg Free Press, February 15, 2007, online edition: www.winnipegfreepress.com/breakingnews/world/story/3881650p-4489229c.html.

34. Internet World Stats, March 24, 2007: www.internetworldstats.com/asia/cn.htm.

35. “China Again Seeks to Control the Internet,” December 6, 2000, www.bizasia.com/gen/articles/stand_art.htm?ac=EKBGN-Y.

36. Managing Internet Information-Release Services, P.R.C. Ministry of Information Industry Regulation, November 7, 2000; see also “China Issues Regulations on Managing Internet Information-Release Services,” China Online, November 13, 2000; www.chinaonline.com/issues/Internet_policy/NewsArchive/Secure/2000/November/C00110604.asp. Other restrictions target a variety of disfavored groups, particularly supporters of the Falun Gong spiritual movement. See “China Passes Internet Security Law,” China Online, December 29, 2000; www.chinaonline.com/issues/Internet_policy/NewsArchive/Secure/2000/December/C00122805.asp.

37. Geremie R. Barme and Sang Ye, “The Great Firewall of China” (June 1997), www.wired.com/wired/archive/5.06/china.html?person=bill_gates&topic_set=wiredpeople.

38. www.wired.com/wired/archive/5.06/china.html?person=bill_gates&topic_set=wiredpeople.

39. Digital Freedom Network, “Attacks on the Internet in China,” April 30, 2001: www.dfn.org/focus/china/netattack.htm.

40. www.dfn.org/focus/china/netattack.htm.

41. www.dfn.org/focus/china/netattack.htm.

42. www.dfn.org/focus/china/netattack.htm.

43. www.dfn.org/focus/china/netattack.htm.

44. www.dfn.org/focus/china/netattack.htm.

45. www.dfn.org/focus/china/netattack.htm.

46. www.dfn.org/focus/china/netattack.htm.

47. www.dfn.org/focus/china/netattack.htm.

48. www.dfn.org/focus/china/netattack.htm.

49. www.dfn.org/focus/china/netattack.htm.

50. www.dfn.org/focus/china/netattack.htm.

51. www.dfn.org/focus/china/netattack.htm.

52. www.dfn.org/focus/china/netattack.htm.

53. Michael Liedtke, “Google Agrees to Censor Results in China,” January 24, 2006; www.breitbart.com/article.php?id=D8FBCF686&show_article=1.

54. Human Rights Watch, “The Internet in the Middle East and North Africa: Free Expression and Censorship,” hrw.org/advocacy/internet/mena/.

55. Saudi Internet regulations, Saudi Arabia Council of Ministers Resolution, February 12, 2001, www.al-bab.com/media/docs/saudi.htm.

56. Human Rights Watch World Report 1999, “Freedom of Expression on the Internet,” www.hrw.org/hrw/worldreport99/special/Internet.html.

57. “Turkey Revokes YouTube Ban,” The Age, March 10, 2007; www.theage.com.au/news/Technology/Turkey-revokes-YouTube-ban/2007/03/10/1173167025391.html.

58. Christina Fink, “Burma: Constructive Engagement in Cyberspace?” (1997), www.burmafund.org.

59. Digital Freedom Network, “The New Net Regulations in Burma,” January 31, 2000, www.dfn.org/voices/burma/webregulations.htm.

60. www.dfn.org/voices/burma/webregulations.htm.

61. Alison Hunter, “Burma's Internet Censorship Worsening,” October 12, 2005; www.mizzima.com/mizzima/archives/news-in-2005/News-in-Oct/12-Oct-05-40.htm.

62. Digital Freedom Network, “Burma Wards Off the Internet Revolution,” January 31, 2000; www.dfn.org/focus/burma/Web-crackdown.htm.

63. The notion of “unprotected” speech is discussed in Section 72.2.4.3.

64. In the context of the right to speak, the requirement of state action is somewhat relaxed compared to other constitutional rights. For example, when private persons—not the government—use the courts against another private person's speech, “state action” is usually found. E.g., New York Times Co. v. Sullivan, 376 U.S. 254 (1964) (defamation); Hustler Magazine, Inc. v. Falwell, 485 U.S. 46 (1988) (intentional infliction of emotional distress).

65. Texas v. Johnson, 491 U.S. 397, 403 (1989).

66. W. Va. Bd. of Educ. v. Barnette, 319 U.S. 624, 632 (1943).

67. McIntyre v. Ohio Elections Comm'n, 514 U.S. 334 (1995).

68. Lamont v. Postmaster General, 381 U.S. 301, 307 (1965) (invalidating law requiring willing recipient to request that certain, state-defined materials be sent to him); Virginia Pharmacy Bd. v. Virginia Consumer Council, 425 U.S. 748, 756 (1976).

69. City of Lakewood v. Plain Dealer Publishing Co, 486 U.S. 750 (1988).

70. Buckley v. Valeo, 424 U.S. 1, 19 (1976).

71. Simon & Schuster, Inc. v. Members of N.Y. State Crime Victims Bd., 502 U.S. 105, 117 (1991).

72. City of Ladue v. Gilleo, 512 U.S. 43 (1994).

73. Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting).

74. Hurley v. GLIB, 515 U.S. 557 (1995).

75. Cohen v. California, 403 U.S. 15, 24 (1971) (emphasis added).

76. Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 66 (1963).

77. Thornhill v. Alabama, 310 U.S. 88, 97 (1940). (“It is not merely the sporadic abuse of power by the censor but the pervasive threat inherent in its very existence that constitutes the danger to freedom of discussion.”)

78. Schenck v. United States, 249 U.S. 47, 52 (1919).

79. Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring).

80. Brandenburg v. Ohio, 395 U.S. 444, 447 (1969).

81. American Booksellers Assn., Inc. v. Hudnut, 771 F.2d 323, 333 (7th Cir. 1985), aff'd mem., 475 U.S. 1001 (1986).

82. See also Planned Parenthood v. American Coalition of Life, 244 F.3d 1007 (9th Cir. 2001).

83. New York Times Co. v. Sullivan, 376 U.S. 254, 292 (1964).

84. Police Dept. of Chicago v. Mosley, 408 U.S. 92, 96 (1972).

85. NAACP v. Button, 371 U.S. 415, 433 (1963).

86. See note 69.

87. Smith v. California, 361 U.S. 147 (1959).

88. Speiser v. Randall, 357 U.S. 513 (1958).

89. Gertz v. Robert Welch, Inc., 418 U.S. 323 (1974).

90. Organization for a Better Austin v. Keefe, 402 U.S. 415, 419 (1971).

91. New York Times Co. v. United States, 403 U.S. 713, 729 (1971) (Stewart, J., concurring).

92. Nebraska Press Ass'n v. Stuart, 427 U.S. 539 (1976).

93. Staub v. Baxley, 355 U.S. 313, 322 (1958).

94. Niemotko v. Maryland, 340 U.S. 268, 271 (1951).

95. Freedman v. Maryland, 380 U.S. 51, 58 (1965).

96. Id. at 58–59.

97. See, e.g., United States v. Playboy Entertainment Group, Inc., 120 S. Ct. 1878, 1880 (2000); Reno v. ACLU, 521 U.S. 844 (1997).

98. R.A.V. v. St. Paul, 505 U.S. 377, 391 (1992).

99. Perry Ed. Assn. v. Perry Local Educators' Assn., 460 U.S. 37, 46 (1983).

100. R.A.V., supra (government may not regulate otherwise unprotected “fighting words” based on viewpoint).

101. Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622, 642 (1994) (citations omitted).

102. Id. at 661.

103. Clark v. Community for Creative Non-Violence, 468 U.S. 288, 293 (1984).

104. United States v. O'Brien, 391 U.S. 367, 376–77 (1968).

105. City of Renton v. Playtime Theatres, Inc., 475 U.S. 41, 47 (1986).

106. Boos v. Barry, 485 U.S. 312, 335 (1988) (Brennan, J., concurring).

107. Forsyth County v. Nationalist Movement, 505 U.S. 123, 135 (1992).

108. United States v. Reese, 92 U.S. 214, 221 (1876).

109. NAACP v. Button, 371 U.S. at 433.

110. Kolender v. Lawson, 461 U.S. 352, 357–358 (1983).

111. Thornhill v. Alabama, 310 U.S. 88, 97 (1940).

112. Id. at 97–98.

113. Often, laws are struck down only “as applied,” that is, the particular application of the law to a particular person is found to violate the Constitution. Facial invalidation, on the other hand, is not limited to the particular application but to the entire law, roughly speaking. See, e.g., Richard H. Fallon, Jr., “As-Applied and Facial Challenges and Third-Party Standing,” 113 Harv. L. Rev. 1321 (2000).

114. Broadrick v. Oklahoma, 413 U.S. 601, 612–615 (1973).

115. Red Lion Broadcasting Co. v. FCC, 395 U.S. 367 (1969).

116. Int'l Soc'y for Krishna Consciousness, Inc. v. Lee, 505 U.S. 672, 678 (1992).

117. Legal Services Corporation v. Velazquez, 531 U.S. 533 (2001).

118. Arcara v. Cloud Books, Inc., 478 U.S. 697, 707 n.4 (1986).

119. Giboney v. Empire Storage & Ice Co., 336 U.S. 490, 502 (1949).

120. Watts v. United States, 394 U.S. 705, 707 (1969).

121. Watts v. United States, 394 U.S. at 707–8.

122. Bartnicki v. Vopper, 121 S. Ct. 1753, 1765 (2001).

123. New York Times Co. v. Sullivan, 376 U.S. 254 (1964).

124. Chaplinsky v. New Hampshire, 315 U.S. 568, 571 (1942).

125. Chaplinsky v. New Hampshire, 315 U.S. at 572.

126. Cohen v. California, 403 U.S. 15, 20 (1971).

127. Miller v. California, 413 U.S. 15, 24 (1973).

128. Miller v. California, 413 U.S. at 32.

129. New York v. Ferber, 458 U.S. 747, 764 (1982).

130. The federal child pornography statute reaches “any visual depiction” of a minor under 18 years old engaging in “sexually explicit conduct,” which includes “actual or simulated” sexual intercourse, bestiality, masturbation, sadistic or masochistic abuse, or “lascivious exhibition of the genitals or pubic area.” 18 U.S.C. § 2256.

131. Ferber, 458 U.S. at 764–765. Notably, the rationale for regulating child pornography is not content-based. Instead, it is based on the harm to children from being subjects of sexual performances, as well as the harm from the distribution of such photographs.

132. Ashcroft v. Free Speech Coalition, 122 S. Ct. 1389 (2002).

133. R.A.V. v. City of St. Paul, 505 U.S. 377, 391 (1992).

134. Ginsberg v. New York, 390 U.S. 629 (1968).

135. 44 Liquormart, Inc. v. Rhode Island, 517 U.S. 484 (1996).

136. Red Lion Broadcasting Co. v. FCC, 395 U.S. 367 (1969).

137. Speech on the Internet, however, is entitled to “full” First Amendment protection.

138. City of Erie v. Pap's A.M., 120 S. Ct. 1382 (2000).

139. FCC v. Pacifica, 438 U.S. 726 (1978).

140. Denver Area Educ. Telecom. Consortium v. FCC, 518 U.S. 727 (1996); but see note 97.

141. Sable Communications v. FCC, 492 U.S. 115 (1989).

142. Speech is “harmful to minors” if it (i) is “patently offensive to prevailing standards in the adult community as a whole with respect to what is suitable … for minors”; (ii) appeals to the prurient interest of minors; and (iii) is “utterly without redeeming social importance for minors.” Ginsberg v. New York, 390 U.S. 629, 633 (1968) (upholding conviction of magazine vendor for selling adult magazine to 16-year-old).

143. Reno, 521 U.S. at 865.

144. Reno, 521 U.S. 844.

145. Erznoznik v. City of Jacksonville, 422 U.S. 205, 212–214 (1975).

146. Reno, 521 U.S. at 878. Lower courts have held that “if a work is found to have serious literary, artistic, political or scientific value for a legitimate minority of normal, older adolescents, then it cannot be said to lack such value for the entire class of juveniles taken as a whole.” American Booksellers Ass'n v. Webb, 919 F.2d 1493, 1504–5 (11th Cir. 1990) (quoting American Booksellers Ass'n v. Virginia, 882 F.2d 125, 127 (4th Cir. 1989)) (other citations omitted).

147. Pub. L. No. 105-277, 112 Stat. 2681 (1998) (codified at 47 U.S.C. § 231).

148. ACLU v. Reno, 31 F. Supp. 2d 473 (E.D. Pa. 1999).

149. ACLU v. Reno, 217 F.3d 162, 177 (3d Cir. 2000), cert. granted sub nom. Ashcroft v. ACLU, 121 S. Ct. 1997 (2001), argued Nov. 28, 2001.

150. E.g., ACLU v. Johnson, 194 F.3d 1149 (10th Cir. 1999) (affirming injunction against enforcement of New Mexico statute criminalizing dissemination by computer of material harmful to minors); American Libraries Ass'n v. Pataki, 969 F. Supp. 160 (S.D. N.Y. 1997) (enjoining a New York statute similar to the CDA that criminalized the use of a computer to disseminate sexually explicit materials to minors).

151. Codified at 47 U.S.C. § 254(h) and 20 U.S.C. § 9134.

152. The ALA's Resolution on the Use of Filtering Software in Libraries is available from the ALA Office for Intellectual Freedom.

153. Mainstream Loudoun v. Bd. of Trustees of the Loudoun County Library, 2 F. Supp. 2d 783 (E.D. Va. 1998) (Mainstream Loudoun I); Mainstream Loudoun v. Bd. of Trustees of the Loudoun County Library, 24 F. Supp. 2d 552 (E.D. Va. 1998) (Mainstream Loudoun II).

154. Mainstream Loudoun I, 2 F. Supp. 2d at 787.

155. Mainstream Loudoun II, 24 F. Supp. 2d at 564–565.

156. Id. at 556 (“undisputed” that “sites that do not contain any material that is prohibited by the Policy” were blocked).

157. Mainstream Loudoun II, 24 F. Supp. 2d at 567.

158. Mainstream Loudoun II, 24 F. Supp. 2d at 569.

159. Electronic Frontier Foundation, “Supreme Court Supports Library Internet Blocking Law,” June 23, 2003; www.eff.org/Censorship/Censorware/20030623_eff_cipapr.php.

160. 87 Cal. App. 4th 684 (1st Dist. 2001).

161. Meritor Savings Bank FSB v. Vinson, 477 U.S. 57, 65 (1986).

162. “EEOC Determination that Net Porn Leads to Library Hostile Working Environment,” May 23, 2001; www.eff.org/Censorship/Censorware/20010523_eeoc_determination.html.

163. Bethel School District No. 403 v. Fraser, 478 U.S. 675 (1986) (upholding suspension of high school student for making sexually suggestive speech at school assembly).

164. Hazelwood Sch. Dist. v. Kuhlmeier, 484 U.S. 260 (1988) (upholding teacher's censorship of articles destined for newspaper prepared by journalism class).

165. Tinker v. Des Moines Indep. Community School Dist., 393 U.S. 503, 506 (1969).

166. Id. at 511. Schools may [however] regulate students' First Amendment rights if the prohibited speech would “materially and substantially interfere with the requirements of appropriate discipline in the operation of the school.” Id. at 509. This burden cannot be satisfied by “undifferentiated fear or apprehension of disturbance”; the school must show “something more than a mere desire to avoid the discomfort and unpleasantness that always accompany an unpopular viewpoint.” Id. at 508.

167. Carey v. Population Servs. Int'l, 431 U.S. 678, 692 (1977) (plurality opinion).

168. Reno v. ACLU, 521 U.S. 844 (1997); Ginsberg v. New York, 390 U.S. 629 (1968); Prince v. Massachusetts, 321 U.S. 158, 166 (1944).

169. Erznoznik, 422 U.S. at 213–214.

170. In other contexts, schools often accommodate or try to accommodate parental wishes as to the exposure of their children in school to undesired material, even when the material is part of the school's chosen curriculum. Catherine J. Ross, “An Emerging Right for Mature Minors to Receive Information,” 2 U. Pa. J. Const. L. 223, 247 n.119 (1999) (“nearly every school district in the country allows parents ‘to opt their own children out of sexuality and AIDS education, as well as out of specific activities or assignments that conflict with their religious beliefs.’”), quoting People for the American Way, A Right Wing and a Prayer: The Religious Right and Your Public Schools 60 (1997). In some states, such accommodation with respect to all or specified categories of curricular materials is required. Minn. Stat. Ann. § 126.699 (West 1994) (schools must allow parents to review curricular materials and must provide alternative materials to replace those that parents find objectionable for any reason); Mass. Gen. Laws Ann. ch. LXXI 32A (West 1998) (requiring that school districts notify parents of the content of any curriculum primarily involving “human sexual education or human sexuality” and afford parents a flexible way of exempting their children from such curricula upon written notice to the school); Va. Code Ann. § 22.1-207-2 (Michie 1998) (giving public school parents the right to review “the complete family life curricula [sic], including all supplemental materials”).

171. Planned Parenthood of Cent. Mo. v. Danforth, 428 U.S. 52, 74 (1976) (minors' right to abortion).

172. Electronic Frontier Foundation, “Internet Blocking in Public Schools,” June 23, 2003; www.eff.org/Censorship/Censorware/net_block_report/.

173. First Amended Complaint in Kathleen R. v. City of Livermore, www.techlawjournal.com/courts/kathleenr/19981103.htm.

174. Members of the Information Society Project at Yale Law School, “Filtering the Internet: A Best Practices Model,” September 15, 1999; www.copacommission.org/papers/yale-isp.pdf.

175. Reno v. ACLU, 521 U.S. 844 (1997).
