Peer Community

A lot of p2p discussion is just about the technology, and one can see definitions of the term that fully reflect that view, such as the one given by the Peer-to-Peer Working Group (www.peer-to-peerwg.org)—perhaps one of the better definitions, even though it seems to ignore the communicative aspect (that is, IM) altogether.

Peer-to-Peer Defined: Peer-to-peer computing is sharing of computer resources and services by direct exchange.

I tried to go beyond this computer-limited view, even though most of this book (especially Part II) goes deep into technological detail, by sandwiching it between history, analogy, and legality views in Part I, and the social and personal views here in Part III. In fact, much of peer technology can be viewed as based on recognizing the value of the individual within a community of users. Realistically, this recognition involves both freedoms and responsibilities. Tim Berners-Lee, credited as creator of the World Wide Web, expressed a core freedom this way:

There’s a freedom about the Internet: As long as we accept the rules of sending packets around, we can send packets containing anything to anyone.

The rules he speaks about are the underlying technical conventions, the protocols that make the infrastructure communication possible at all. Packets of data are pretty impersonal entities until you can interpret and reconstruct the content—the transport medium doesn’t care one way or the other what they represent.

Another freedom exists on the Internet as well, especially relevant to peer application equality. Call it the universal interoperability principle.

Bit 11.1 Standard Internet protocols are the universal level playing field.

So long as a device—any device—obeys the peer network protocols, its size, shape, form, and location are irrelevant. Anyone can play.


However, social rules are implicit in this situation, often unspoken, yet tacitly understood by the particular group that follows them. Like all social rules, they largely depend on your particular peer group’s philosophical outlook. With regard to the Internet’s common currency of content—information—the social rules are what mold how we use it—or conversely, restrict it.

Information might be neutral and free in theory, but in the human context, it is often subservient to other, more value-charged issues. Nor am I neglecting the commercial aspects by stating this relationship, because few things are as emotion- and value-laden as money.

But let’s return to the social dimension of interoperability for a moment.

Chapter 1 introduced Metcalfe’s Law, which states that the value of a peer network is approximately proportional to the square of the number of nodes (written as n² – n). The law can be seen as a value statement from a purely technical viewpoint, even though we are considering the perceived value of available resources.

From the community point of view, the important thing is groupings between people. The math tells us that the number of potential nontrivial groupings in this network grows exponentially with the number of individual nodes n (written as 2ⁿ – n – 1, the number of subgroups containing at least two members). This relationship is known as Reed’s Law of networking.

Bit 11.2 Reed’s Law: The social value of the network is proportional to 2ⁿ.

The network represents the exponential value of interest group affiliations.


As the resource value (or intercommunication) of a network grows quadratically with a linear increase in the number of nodes, its potential social value (as grouping) therefore grows even faster. We may not always do the math, or be clear about how relevant it is, but intuitively, peer users do feel these value-rich aspects on some level.

Consider if you will the contrasting value of a network consisting of relatively few transmitters of information compared to a great many receivers—the common broadcast or server-to-client model. There, the network value increases only linearly with the number of nodes, because the information flow is unidirectional. This, incidentally, is Sarnoff’s Law of networking. All good things come in threes.

So, transforming a cluster of, say, 10 desktop clients that only connect to Web servers into a peer group increases their aggregate resource value by about a factor of 100. However, the potential social value increases by about a factor of a thousand (2¹⁰) and roughly doubles for every additional node. These figures very quickly get mind-boggling, if not absurd, with larger clusters, but the implications are clear. It’s not much of a reach to assume that the value experience of the social aspects of peer networking, when people are involved, comes to totally dominate all other considerations even in very small networks. This grouping aspect also extends to all communicative peer situations, even automated ones, whenever such groupings confer added value.
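To make the arithmetic concrete, here is a minimal sketch in Python (purely illustrative; the function names are my own, not taken from any p2p implementation) that computes the three value measures for small clusters:

    # Illustrative comparison of the three network "laws" discussed above.
    # The numbers are unitless measures of potential value, not real economics.

    def sarnoff(n):
        """Broadcast model: value grows linearly with the number of nodes."""
        return n

    def metcalfe(n):
        """Peer connectivity: number of possible pairwise links, n^2 - n."""
        return n * n - n

    def reed(n):
        """Group forming: nontrivial subgroups of two or more members, 2^n - n - 1."""
        return 2 ** n - n - 1

    for n in (2, 5, 10, 20):
        print(f"n={n:>2}  sarnoff={sarnoff(n):>3}  metcalfe={metcalfe(n):>5}  reed={reed(n):>8}")

    # n=10 gives metcalfe=90 (about a factor of 100 over 10 isolated clients)
    # and reed=1013 (about a thousand), matching the figures quoted in the text.

Running the loop for larger n shows how quickly the group-forming value outruns both of the other measures, which is the whole point of the comparison.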

Technology Acceptance

The things that transform the world are often concepts—simple concepts—that come from unexpected directions and somehow, without any real premeditation, end up being “the normal way” to do things. We tend to see only the changing shape of the technology detail, already there, ever more complex and sophisticated. However, look a bit deeper, and you’ll realize that much technology expresses only a few, simple, human concepts—or let’s instead say, much successful technology.

But what is success, really? Apart from anything else, a pragmatic indicator is that success generally means social acceptance. A technology can be “damned good” indeed, in the technical sense, seemingly deserving of instant recognition and adoption, yet fail abysmally. Why?

It’s not just a matter of blind luck which technology succeeds and which is forgotten. The phrase “ahead of its time” is sometimes used for innovative curiosities consigned to a dusty attic, but often the answer is that the social context at the time was just not ready to accept it. In some way, the innovation “broke the rules” and paid the price for being too far away from the accepted social norms. Other variations, although less efficient or poorer designs, adhered more closely to the rules and were instead accepted by a sufficient majority. (Yes, this is a generalization that ignores factors such as an abrasive and antisocial innovator personality, or poor business sense.)

In this view, technologies are shaped by the social rules, and perhaps more succinctly, the result can be recast as: Applied technology expresses the social rules that are accepted by those who design and implement it. While innovation is about change, nothing really changes without the broader acceptance—if the tension is too great between the new and the accepted, the new is ignored. This puts constraints on the kinds of change that can happen based on the technological aspects alone. What good is a new technology that’s deployed if nobody uses it?

Conversely, even old-fashioned or low technology can find new uses and become powerful catalysts for change under a new set of social rules. I believe that p2p technology to a large extent falls into this last category, notwithstanding the many innovative features a particular solution might showcase.

Internet and peer technologies are not exempt from social molding. Each significant group involved in developing and deploying the technology also shapes its design in its own image, so to speak. This book provides several examples, which is in part why the examined implementations are selected from both camps: open source and proprietary. Each has a different approach to the same basic p2p functionality, and the end result is also different in ways both obvious and subtle.

Social Criteria of P2P

We can outline the main social criteria common to most of the open source p2p solutions presented in this book—consider how technical decisions often seem based on at least implicit reference to one or more of the following:

  • Consent. Nothing happens without it.

  • Disclosure of information. Censorship or exclusiveness is contrary to the purposes of a p2p network. Open information is necessary to the informed consent of the individuals in their participation.

  • Common ownership. Content is shared freely across the network, not infrequently with a certain, shall we say, casual disregard for prevalent views on intellectual property ownership.

  • Empowerment. Individuals are in full control of most aspects of their participation: their degree of collaboration, which content is shared, and in general the behavior of the software. They are responsible for and control their own local resources.

  • Cooperation without vulnerability. Individuals should be able to cooperate without fearing undesired exposure. The concept includes the possibility of full anonymity while still being able to verify a consistent source.

  • Distribution of storage or functionality. In many systems, network resources are spread across many collaborating nodes, thus becoming more clearly community resources than strictly individual ones.

These social criteria all represent an expression of the attitude and philosophy held by the designers, and are examples of software as culture.

Consider then for each of the criteria, how socially acceptable the view is among the general population of users. Consider also for each, how acceptable the view is for business or government. Consider finally how much these respective measures of acceptance might vary between groups and countries. (Even the process of software localization involves far more than simple translation of the user interface.)

It’s left as an exercise for the reader to formulate a corresponding list that characterizes the social criteria usually expressed by proprietary designs, and thereby to identify the main differences. For some, such considerations easily become overtly political issues, but for the purposes of this book, I won’t go there; I’ll only look at it from the less value-sensitive social perspective.

The Content Control Wars

Peer technologies became very (and visibly) controversial in the wake of Napster’s popularity. While a “new” battle rapidly evolved around music copyright and received media exposure, the battle lines already existed behind the scenes.

A sort of demarcation line has always existed between two different mindsets concerning intellectual property rights in general. On the one hand, the original academic setting of the Internet has long nurtured a strong “freedom of information” attitude that extended to free software and broad notions of fair use of otherwise copyright-restricted content.

Bit 11.3 The primary academic goal is to publish information openly.

The act of publishing information openly, subject to peer review, defines career advancement and is a prerequisite to funding success in the academic world.


On the other hand, the world of industry and commerce has constantly defended and strived to extend legal protection for the exclusive use of the protected product, content, or service—even to the point of it being counterproductive. It’s inevitable that there is considerable tension between these two diametrically opposed views.

Bit 11.4 The primary goal of business is to sell something exclusive.

Protection of exclusive and proprietary rights maximizes short-term profits.


Add to this volatile mixture the interests of state and police control of content, made complicated by the international nature of the Internet and uncertainties about the applicability of national laws to this new medium. Although it’s still considered dubious whether one country’s laws on content are applicable to providers or users in another country with other legislation, not to mention the confusion when many different jurisdictions are involved, pilot cases have already been pursued to that effect—successfully in the first instances.

Bit 11.5 The primary goal of government and law is to control and regulate.

This means control of people, control of money, control of information, and control of rights. The tough part is to balance between too little and too much.


Both commercial and state interests therefore have a natural tendency to work against the spread of p2p technologies that undermine the very notions of central control and content censorship. It would simplify the legislative tangles for them if free access to content could simply be made technically difficult or impossible.

Although we can describe actors on both sides of this conflict as loosely belonging to a pro-p2p or an anti-p2p movement, such a simple dichotomy is a convenience only at a fairly superficial level. The respective actors have their own specific agendas and are just as likely to be opposed to their allies on particular issues.

The prospective peer technology user, however, must understand that the mere choice to use open p2p networks is practically by definition a subversive act, at least in the view of much of the business world and most of the world of authority. That said, significant sections of both worlds still see far enough beyond these narrow constraints to realize that open peer technologies can be both profitable and good for the national interest. This makes the entire show so much more entertaining, if at times more than a little confusing.

All About the Money

A prominent result of the different social views that found expression in Internet technologies is the split between server-side content producers and providers, and the more, shall we say, idealistic peer-to-peer groups. The battle lines have come to center on, predictably, money—or as expressed at one remove, ownership rights.

The FreeWeb FAQ (freeweb.sourceforge.net/faq.html) puts it this way:

Sadly, modern technology has created a situation where privacy and copyright protection are now in direct conflict.

Both cannot exist at the same time.

Through much contemplation, the developers of FreeWeb and Freenet have concluded that the right to privacy and anonymity are fundamental human rights which morally transcend the principle of private intellectual property ownership.

The developers of FreeWeb and Freenet staunchly believe that artists and content creators do deserve compensation for their efforts. However, the existing system has proven itself to be a failure. Of the revenues generated from sale and licensing of works, artists throughout history have only received the tiniest percentage. Also, history has shown that many incredible works have been universally rejected by publishers, only to win accolades much later, often after the creator's death.

The P2P community is keen to usher in a new system of reward for artists/creators, and a new system of contribution for consumers. The internet, as it evolves towards offering a platform for truly safe, secure, convenient (and even anonymous) payments, will make such a system possible.

Even though this particular community has rejected the social rules of the opposing camp in this context, they are by no means radical anti-socialites and are also concerned at some level with money and due compensation. This is not the same as another vocal group they are sometimes confused with, the “we won’t pay” advocates of total freedom of software, content and bandwidth, at no cost.

The difference is that the main focus is the small-scale peer one—in this case, on the individual content creator, who they feel is usually left out of the loop in the large-scale commercial models that are so intent upon preserving and extending content control for the aggregating content publisher/wholesaler.

It’s not necessary to agree with one or the other view of commerce for the purposes of this text; that discussion leads far afield into the “new economy” and the nature of money—interesting enough, but surely worth at least a book on its own. Simply register the fact that the divide does exist and to a great extent is rooted in just the opposing social views described earlier.

Use of Technology

Another focus of the content control wars is on the technology itself, often seen as being disruptive and dangerous simply by existing as it does, free of any centralized control. This issue too has both commercial and political aspects, though it is most often expressed in legislative terms, in attempts to control or ban the technology.

With Gnutella, for example, it can be argued that it is like any other Internet protocol, each of which is just as capable of “encouraging” unlawful purposes in addition to perceived legitimate uses—true for practically any technology, no matter how simple or otherwise innocent. The reactive tendency to repress p2p technology has come to a head in the increasingly harsh anticrime and antiterrorist environment of recent years. Like strong encryption technology before it, p2p thus experiences ever more attempts by authorities to “control” it—or failing that, outlaw it.

Strong encryption technology was initially patented and classed in the United States as nontransferable weapons technology to prevent export. However, due to a technicality in timing between publication and patent, the algorithms were not deemed patentable in foreign patent law, allowing the technology to be developed as open source in Europe. The restrictions have largely fallen by now, because strong public key encryption is legally and easily available, despite attempts to mandate other, less-secure technologies that allow authorities back-door decrypting.

Just as the growing public key infrastructure (PKI) is now an inescapable part of the emerging new Internet, aspects of peer technology are being structurally built into it as well. Chances are decently good that a decade from now, p2p applications will be as natural a part of the landscape as the server-client systems, and hardly anyone looking back will quite see what the fuss was about.

It’s an open question whether or not this will also mean an integrated global system for enforced control of content rights, according to an extended version of the regulation that the commercial interests are trying to erect now. It’s simply too early to tell under which terms society will accept the technology in the long term, and what the ensuing social and economic consequences will ultimately be.

Business vs. Academia

Another view of the ongoing battle describes it as being between the “content faction” who want total control over digital media and its use in any kind of digital device, and the “tech faction” who feel such efforts are misguided.

This control issue is also socio-economic at root; we can note that the content industries always refer to consumers, while tech industries refer to users. Recall that the impulse to empower users was at the very heart of the microcomputer revolution. This same techie impulse is at the heart of the p2p movement.

A desktop in every home gives each individual the kind of computing capacity that IT managers once would have waived their stock options for. In a different context, this situation would surely have been seen as both dangerous and subversive, and undermining the interests of business wishing to sell processing to consumers. As it happened, a consumer mass market for retail processing never emerged, hence there was little objection from that quarter. Never mind that the PC revolution turned into an unprecedented and profitable global industry, far larger than anything anyone could have envisioned—first in computers, then components and software, and now spilling into games and entertainment.

What’s to say the same can’t happen to an emerging p2p infrastructure, nurturing an entirely new form of virtual economy based on network-distributed resources and services? The slogan a peer-cluster in every home LAN comes to mind, and it might be the appropriate battle cry for some new visionary.

Businesses continually seek to erect entry barriers to protect themselves and their particular markets from competition, and intellectual property is no exception. For them, copyright and patent law are often seen as the best (or just most cost-efficient) barrier—essentially a monopoly granted and enforced by government. A strong case could be made that the original intents of patent (that is, limited exclusive rights to innovators in return for publishing and licensing) and copyright (as ownership of created content and the right to derived revenue) have been subverted in the interests of unlimited control and corporate greed, supplanting the original creator interests.

The business view’s desired future for digital content is quite simple: All digital content (such as books and music) will be tied to particular devices, and transfers between them will be difficult if licensed, or impossible if not. This tying down of content, by both legislation for digital content management (a euphemism for digital content control) and hardware implementation, actually ensures a far tighter control of content than for the traditional physical distribution forms that existed before digitized media. The goal is enforced scarcity to control revenue, which according to many is wrong in a digital content context.

John Gilmore of the Electronic Frontier Foundation wrote a lengthy analysis of the subject in his essay What’s Wrong With Copy Protection (see www.toad.com/gnu/whatswrong.html), and it’s worth reading. He answers the question in this way:

What is wrong is that we have invented the technology to eliminate scarcity, but we are deliberately throwing it away to benefit those who profit from scarcity.

Supporters of open source and open content believe on their side that only excellence in execution constitutes a real barrier to market entry, much like a law of nature. They believe it not merely as a principle of theory, but as a practical truth firmly grounded in the traditions of the academic world of public funding: Any project that doesn’t serve its constituency will either fail or be shut down. The artificial exclusiveness that much business seeks irrespective of excellence is therefore anathema.

On occasion, the proponents of this view have put their money where their mouth is, successfully generating revenue from value-added services based on freely distributed software, notably in the Linux and Open Source world. Such success leads us to believe that abundance economies are possible. John Gilmore thinks so (ibid):

I think we should embrace the era of plenty and work out how to mutually live in it. I think we should work on understanding how people can make a living by creating new things and providing services, rather than by restricting the duplication of existing things.

Returning to this book’s main focus, peer technology can be used for many things: collaboration, document sharing, distributed data, efficient distribution, and so on. The way it is implemented tends to assume that replication and distribution of existing data/content is open, unrestricted and free in the academic tradition—in fact, it proves very difficult and awkward to do otherwise.

Unfortunately, one very widespread use (and the only one in the case of the high-profile original Napster and its clones) was (and is) to share files with IPR-protected content in ways that are not allowed by the current “official” interpretation of content ownership laws. This casual disregard was provocative, of course, even to some of the content creators. It motivated the various commercial content interest groups to take notice when the usage became sufficiently commonplace, and it ultimately motivates them to seek to ban the technology completely if they can’t regulate (profit from) its usage.

The objection is really on principle because, despite the vocal complaints by the industry about lost revenue caused by music and film swapping (or for that matter, so-called software piracy by individuals), the revenue projections always make the assumption that any individual possessing an illicit copy would otherwise have bought the same content through normal commercial outlets. For the most part, it’s an unwarranted assumption, even ignoring the fact that many people download “illicit” digital copies of content they have already purchased but want in another, more convenient format. It also ignores the fact that the user might turn consumer by buying new content that they have previewed in downloaded format, something that reported increased sales of music CDs around university campuses would seem to bear out. Neither act is considered fair use by current IPR interpretations.

The issue is not an either-or situation, but one with infinite shades of gray.

Market Assimilation

The real issue lies elsewhere, however. It’s not the abstract, potential capability of the technology that matters but how people commonly use it, and less often considered, how the social norms govern such usage. No modern country would ban telephone technology today, despite the fact that it is easily used by criminals.

While Internet technology has been subjected to such bans in some parts of the world, that has usually been possible partly because only a minority used it. Besides, banning legitimate use rarely stops criminal use. More commonly, it creates an artificial advantage for criminal use because ordinary people can’t use it.

Forces are always at work that promote adaptive responses, even in the initially most adamant opponents. One of the more potent is the realization that there might be a market value in adopting a technology formerly seen as disruptive. Having assimilated the opposition, goes the thought, it can be possible to control as well. As an example of how it can work, I offer the following historic aside:

Does anyone remember when the tape recorder and VCR were both hotly disputed pieces of equipment?

For a time, it was a highly controversial issue to copy music from radio or records to tape, or movies from TV to tape, and play these as often as you wanted, whenever you wanted, for free! The issue wasn’t really about compensation to content creators and distributors, it was about control.

The cinema interest groups fought the very concept of home recording with tooth and claw, striving to reserve movies for the big screen and later for television pay-per-broadcast. Ways to ban the technology and later to prevent illegal copying were considered but found inadequate or too difficult to fully implement. So what happened? Copying to tape for personal use, long seen as fair use by the general population, eventually became officially accepted, even though governments tried to apply special copyright taxes on blank tape media at the request of the music and movie industries.

When the technology became commonplace enough, the content industry quietly adapted to the social consensus. Ubiquitous video rental stores and the rapid transition from cinema to rental media are a testimony to how the movie industry has changed its commercial model to fit the market. It now encourages everyone to own the technology, a VCR or DVD player, and for a pittance rent a legitimate copy of a movie at convenience.

Copying from TV became a nonissue, and copying from other tapes less interesting because rentals are so cheap. Even legally purchasing your own copy is no more expensive than buying a book. Movie production is bigger than ever. Even the DVD system of region blocking is gradually fading. However, so far the ban remains on decoder software that can give DVD-movie support to platforms other than Windows, as witnessed by the DeCSS mess.

The hope is that the music and movie industry will eventually want p2p technology to work for them, in much the same way. It should offer products and services attractive to consumers, instead of blindly attacking the technology and its supporters. This insight must have reached some quarters, as evidenced by the music industry’s sudden and rapid acquisition of music p2p technology companies during 2001, even as competitors’ lawsuits were still pending against the same companies in the courts.

Some details of the proposed music market changes still seem outrageous to the general public, such as limited playback of purchased music tracks, and inability to copy the tracks to other equipment that the buyer owns. Nevertheless, we can expect that aspect of commercialized p2p distribution to eventually mature under the pressure of the market. We hope.

As noted by John Gilmore in his essay (ibid), the copy prevention technology already being integrated into products on the market might before then leave the consumer-user in the single role of passive consumer. He asks:

Being devil’s advocate for a moment, why should self-interested companies be permitted to shift the balance of fundamental liberties, risking free expression, free markets, scientific progress, consumer rights, societal stability, and the end of physical and informational want? Because somebody might be able to steal a song? That seems a rather flimsy excuse.

Bit 11.6 Resistance is futile. Prepare to be assimilated.

Even the most adamant resistance to new technology can vanish in an instant if there’s an advantage to be turned by adopting it faster than the competition.


The Legal Challenge

Legislated digital content control is a thorny issue, highly politicized and steeped in vested interests. In America, the current efforts have also come into conflict with constitutional rights and widely accepted norms of fair use. There, one sees that the traditional balance between the rights of creators, on the one hand, and the rights of freedom of speech and the press, on the other, is being lost.

When copyright legislation has been unilaterally extended, the public domain has shrunk correspondingly. The right of criticism, even the right to dispute someone else’s rendition of the truth, has been curtailed in practice, weakening the First Amendment’s almost absolute right to publish. The active term of copyright kept getting extended by new legislation in the late 1900s, one consequence being that few works created after 1910 have entered the public domain, as was the original intent of copyright expiry. Now the content control rights created by technological restrictions are not even designed to end—they are made permanent and retroactive for all situations and all forms of media, even those not yet thought of.

Naturally, this has consequences for peer technologies.

Control and P2P

The various hardware and software implementations of digital content control, and the ill-considered legislative framework behind them, are all still very much on the advance, and some of them can critically affect or stop some forms of p2p technology before the media industry might itself segue into the p2p resale business.

Blocking the likes of Napster, Scour, and recently MusicCity and its affiliates was ultimately possible because a single center of operations could be held accountable for the transgressions of the users. Failing direct compliance by the center, one could then go after the hosting Internet provider or upstream connectivity provider and have them shut down the offending server connectivity. Once the central server is killed, clients stop functioning, and the network falls apart.

In the case of truly distributed p2p, no such identifiable center exists. It is nearly impossible to find and stop each offending end user. Trying to automatically detect such users and disrupt their connectivity with denial-of-service attacks, for example, as has been done, is in general opinion seen as an action more reprehensible than the acts it tries to stop. The response is on about the same level of intelligence as lobbing cruise missiles at post office branches found to have stored mail with illegal or suspect content. Besides, distributed networks are very tolerant of nodes dropping out.

The logical next target for those trying to stem the tide of free file exchange is therefore the developer community. Suing individual developers and the companies that develop p2p technology is being tried, but it is a viable option only if the target is small and inconsequential. It’s highly unlikely that you will ever see anyone try to sue Microsoft for the p2p sharing potential of .NET, or Sun Microsystems for the potentially illegal actions that could be performed by users of JXTA. Promising p2p innovations in smaller contexts, however, have been stopped by just the threat of legal action, and venture capital has been made cautious about investing in p2p.

On the other hand, developers aren’t exactly fleeing in droves. “Developers must continue to build software that makes copyright obsolete,” is an expressed sentiment that’s perhaps an overstatement, but developers (and their patrons) understandably don’t feel comfortable about their ever more exposed position. As one posted:

It is known that a number of Gnutella users share files they are not entitled to share according to RIAA and other interest groups. Whether you are one of those users is absolutely none of my business. But as a Gnutella developer, it IS my business that I could possibly be held liable for this.

But it’s also true that technological advances always have a tendency to go beyond the bounds of legislation by providing functionality that the law formulators never envisioned. This is especially true of copyright when extended to digital content.

For some, it’s become a vital issue, as numerous infoanarchy.org postings show.

Nothing but the idea of owning information will stop information technology from developing—nor should it, since your freedom requires it.

Such advocates believe that Gnutella, or networks like it, will inevitably become a huge part of the Internet—either that, or the Internet in its current form will be extinguished. The latter is generally seen as untenable (and not even possible in the short term), despite ever more rigorous and restrictive legislation. Note in this context that government is making itself ever more dependent on public Internet access, even as it is part of the organized movement to close down the very openness it needs.

What might some of the new and proposed legislation mean? A proposed next step in general p2p file sharing might mean that the receiving user must request a file from a specific user, who must explicitly agree to provide it and store an audit trail. Such a consent method provides individual accountability, of sorts, as opposed to the largely anonymous seek-and-download nature of most file-sharing p2p applications out there. Consensual sharing is already implemented in some messaging applications that added file transfer, such as mIRC (an IRC client) and ICQ.
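As a purely illustrative sketch, assuming nothing about any real client, such a consent-and-audit flow might be expressed like this in Python (all class and method names are hypothetical):

    # Purely illustrative sketch of consent-based sharing with an audit trail.
    # All names here are hypothetical; no real p2p client is implied.
    import datetime
    import json

    class ConsentingPeer:
        def __init__(self, peer_id):
            self.peer_id = peer_id
            self.shared_files = {}   # filename -> content offered for sharing
            self.audit_log = []      # one record per request, granted or not

        def request_file(self, requester_id, filename):
            """Handle a request from a named peer; log the decision either way."""
            granted = (filename in self.shared_files
                       and self.ask_owner(requester_id, filename))
            self.audit_log.append({
                "time": datetime.datetime.utcnow().isoformat(),
                "requester": requester_id,
                "file": filename,
                "granted": granted,
            })
            return self.shared_files[filename] if granted else None

        def ask_owner(self, requester_id, filename):
            # A real client would prompt the sharing user for explicit consent here.
            return True

        def dump_audit_trail(self):
            return json.dumps(self.audit_log, indent=2)

    # The receiver asks a specific, known peer instead of searching anonymously.
    alice = ConsentingPeer("alice")
    alice.shared_files["notes.txt"] = "my own meeting notes"
    data = alice.request_file("bob", "notes.txt")
    print(alice.dump_audit_trail())

In a real network the audit trail would presumably also need to be tamper-evident, and perhaps reportable to some third party, for the accountability to mean anything.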

This assumes that such transfers are not made meaningless by hardware blocks, as mandated by present and future legislation for digital content control. And mutual consent doesn’t carry much weight if the transaction as such is made illicit. At its extreme, users might not be able to record or store any digital content at all, unless it is centrally authenticated that the user owns the copyright, or has purchased the (per-use) rights to replicate it—no hardware/software will exist that can do it otherwise.

The legislative backing for technology developments of this kind includes (in the U.S.) examples like the Audio Home Recording Act, the Digital Millennium Copyright Act, and the FCC ruling that it will be illegal to offer viewers the capability to record the new HDTV-format programs at all.

Hold the Laws, Please

Lawrence Lessig is a law professor at Harvard University and often profiled in media as a clear voice speaking out about the issues of intellectual property and freedom of information exchange. He encourages people to take a wait-and-see approach when it comes to legislation and p2p—as in a keynote speech at the O'Reilly P2P Conference:

Let's build it first. We can expect that there will be conflicts.

The question is whether we stay committed to that initial ideal … before we send in the lawyers.

U.S. (and European) legislators would do well to adopt such a moderate approach, especially considering that the Internet is a global domain, not one dominated by a single nation or single culture, nor to be regulated by the laws of any one nation.

Bit 11.7 Open source is firmly in the build-now, regulate-later camp.

The main advantage to this approach is that at least one then knows what, exactly, one is trying to regulate, and whether there’s any point in doing so.


But then as we’ve seen, it’s very much a question of short-sighted business interests pushing the implementation of such laws—and to a great extent, against the common sense attitudes of the general population. That doesn’t mean legal protection isn’t necessary even in what’s termed cyberspace—the totally free and heady frontier days are over, and the Internet must become a safe place to visit and do real business in. That means serious community building, not empire building.

Even if the laws are not enacted, unfortunately, we still have the private agreements by manufacturers to embed copy-prevention technology in commodity media components. Examples of such efforts are SDMI and CPRM/CPPM submitted as part of new recording standards. Already, DAT and MiniDisc recorders treat all analog input as if it contains copyrighted materials that the user has no rights to.

While DVD recorders might be marketed for home movie editing and storage and digital photo storage, it seems clear that they will be internally blocked from the common VCR usages of time-shifting television recordings and recording streaming video from the Internet. Because manufacturers who attempt to offer more open functionality are sued, for example, for circumventing “copy protection” under the DMCA, competing noncompliant products are rapidly forced out of the market. Hence, the pragmatic “you have the right to record a copy of what you have the right to see” ruling (the Betamax case) defended by existing law is made irrelevant by largely undisclosed technological blocks.

Whether or not one agrees that manufacturers have the right to arbitrarily cripple the functionality of their own products, one must deplore the way the dominant players are redefining the global playing field for everybody.

Micropayment Solution

Seen from another angle, much of the current problem with free digital content (interpreted as illegal copyright infringement) may stem from the simple and sad fact that a global payment system for the Internet, although long envisioned and awaited, has not yet materialized. If IPR interests were paid, then surely they’d be happy?

The many digital cash and e-wallet projects and proposals, all filled with promise, were ultimately left to fade away as footnotes. Why? The detailed reasons are many—often the lack of standards or transparent interoperability is blamed—but the fundamental reasons come down to just two:

  • Lack of user convenience, which in this case can be rephrased as a lack of a ubiquitous and transparent payment infrastructure for e-commerce.

  • Lack of user confidence and trust, which is not solved just by making the system secure and trustworthy in the technical sense.

A solution that satisfied both of these requirements would quickly become a de facto standard, as other players quickly adopted it to have a share of the growing pie. As it is, we’re stumbling along with a mix of sort-of-workable payment solutions in a sea of dubious advertising and free-to-user content.

The most accepted and common payment scheme on the Internet today is still the credit card details typed into a Web form, hopefully on a secure protocol server (using HTTPS). This form of transaction—which, by the way, is safer than using the same card physically in shops and restaurants—relies heavily on a really remarkable amount of consumer trust in the system and on the way the common cards are accepted around the world as a consumer-convenient form of cashless payment. The global credit card infrastructure, already in place, made card purchases translate naturally into the borderless Internet context. It’s also relatively easy to implement over Web servers, from the technical point of view.

The major objections to credit card payment are that it’s less convenient for the merchants (who pay the transaction costs), and that the current routing through banks makes card payment not feasible for smaller amounts. This also means that you have to have (and afford) a special merchant’s account and a minimum volume to process card payments. Individuals with irregular sales, such as shareware authors, can use payment services such as SwReg.org for a cut of profits—typically a dollar fee plus 5 percent of the sale, but even offering “micro-commissions” for low-price items. With stated annual sales at the $12 million mark, it’s easy to see how such an established organization can meet the overhead of card processing.

In short, the state of Internet payments is at about the level of coin- or card-operated public telephones, except that the minimum charge is ridiculously high for that context. Returning to the telephony analogy for p2p makes sense even in the realm of e-payments. Telephony charges are accumulated as microcharges against a subscriber and billed periodically, like any consumer utility. User decisions are about higher service abstractions, such as which operator has the best rates and how high a periodic bill one can afford, not individual call charge blips or particular call-service fees.

Most operators work this way, and almost everyone accepts it.

Although the telephony network introduced and retains a per-minute style of charging for connectivity and services, it is less applicable to the packet-based kind of virtual connectivity that characterizes the Internet. The model is even becoming less relevant to telephony as its infrastructure also becomes more packet-based and service-subscriber oriented.

The Internet unfortunately lacks a metering and billing infrastructure for payments (disregarding the dial-up ISP by-minute model, which is essentially telephony in any case), so the closest thing now being deployed by the larger content providers is the fixed subscription model, monthly or yearly. Unfortunately, this addresses the revenue issue only from the provider’s point of view. Subscriptions make sense only when people regularly rely on a single provider (or a handful): telephone operators, newspaper or magazine publishers, commuter cards, and so on. The advantage of the model lies solely in aggregating theoretical small fees into periodic sums that can be handled by traditional, out-of-band payment mechanisms (card, check, bank deposit, or transfer) and simplifying accounting by bundling access into all-or-nothing.

For freely roving Internet users, on the other hand, subscriptions are site obstacles, blocking their easy access to content. You can’t hyperlink directly to subscription content, and search engines can’t index it. The visitor is instead redirected to the entrance portal page to log in first. Session memory, cookie tracking, and recent, more intelligent login redirection to the original URL admittedly help usability there, but the site as such remains opaque to the outside world.

The main user objection to subscriptions is that the typical monthly fees for each site add up alarmingly fast, given the broad range of sites a typical user wants to visit. This model is roughly equivalent to telephone users being required to pay a monthly subscription fee for each and every telephone number they want to access. Put that way, it’s clearly not a convenient solution, nor is it a viable one in the long run.

Subscriptions might make sense in a p2p environment, where the fee buys you access to the entire network for a particular time, but even here it doesn’t solve the real issue of e-commerce: how to make the payments in the first place with minimal transaction costs. This issue ultimately makes the user look elsewhere. Out-of-band systems like Paypal (www.paypal.com) are only a short-term palliative.

Bit 11.8 Content subscriptions per site are not a viable e-revenue solution.

Let this stand for a prediction in the face of the projected rapid adoption of subscription solutions on the Web in 2002. As with the click-through advertisement funding it replaces, the subscription model must founder in the end when the expected flow of revenue devolves from a disappointing trickle to ever nearer zero.


But seriously, and allowing for the idea that digital content should have a price, the only realistic and lasting solution to some form of revenue model for content on the Internet has to be a very low pay-per-use model built into the infrastructure—in a word, a micropayment technology as transparent as telephony rates for phone users. Dial-up Internet users don’t stop browsing the Web just because the activity is accruing charges to the operator or ISP by the minute. Neither should they conceivably object to similar sums going straight to the owners of the content they browse, as long as they aren’t forced to do the detailed accounting.

Bit 11.9 Viable network agency will presume a micropayment infrastructure.

Let this be a longer-term prediction, that really useful p2p agency won’t deploy until micropayment-based, scalable usage of distributed resources flies.


Given a decent infrastructure, micropayments make excellent sense because a characteristic of digital media is that the transaction costs can be made arbitrarily small, so even ridiculously small fees per item can be handled with no loss. The user-consumers see periodic sums charged to a real-world cash account; the user-providers see accrued sums deposited to their respective accounts. The micropayment system does the math either way, handles seamless transactions, and can cough up the item-specified lists for inspection on demand. The main design goal is to avoid needless detail for the user on either side of the transaction. Real microeconomics could do the same for Internet resources that virtual micropayment systems do for p2p resources: manage and scale them transparently on demand.
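A minimal sketch of the aggregation idea, assuming a toy in-memory ledger with made-up names rather than any real payment API, might look like this:

    # Minimal sketch of micropayment aggregation: tiny per-item fees accumulate
    # in the background and are settled periodically as one real-world charge.
    from collections import defaultdict

    class MicropaymentLedger:
        def __init__(self):
            self.charges = defaultdict(list)    # consumer -> [(item, amount), ...]
            self.earnings = defaultdict(float)  # provider -> accrued amount

        def record(self, consumer, provider, item, amount):
            """Record a single micro-charge, e.g. a fraction of a cent per item."""
            self.charges[consumer].append((item, amount))
            self.earnings[provider] += amount

        def settle(self, consumer):
            """Aggregate a consumer's micro-charges into one periodic sum."""
            return round(sum(amount for _, amount in self.charges.pop(consumer, [])), 2)

        def statement(self, consumer):
            """Itemized list, available on demand but normally hidden from the user."""
            return list(self.charges.get(consumer, []))

    ledger = MicropaymentLedger()
    for page in range(250):                          # a month of casual browsing
        ledger.record("reader", "site-owner", "page-%d" % page, 0.002)
    print(ledger.settle("reader"))                   # one 50-cent charge: 0.5
    print(round(ledger.earnings["site-owner"], 2))   # the provider accrues the same sum

The point of the sketch is only that the per-item bookkeeping stays invisible on both sides until someone asks for the itemized statement; how the settled sums actually move between real accounts is the hard infrastructure problem the text describes.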

Free and Legal

The preceding discussion might give the impression that we’ll have to pay and pay, but legal for-free alternatives keep turning up even in such a contentious area as the sharing of music files. A couple of recent examples are FurthurNet and RootNode, which might be seen as p2p versions of live-recording distribution channels such as Etree. Some artists also choose to distribute their music themselves, and releasing recordings for free trading is seen as a way of driving interest for their works.

Etree (www.etree.org) is a community of FTP servers that host and distribute lossless digital audio files (not degraded MP3 format) across the Internet. It’s important to realize that Etree distributes concert recordings of bands that explicitly allow this. There is both a considerable amount of such legal material and a large community of music lovers that trade legal recordings. The primary focus is for users to be able to burn their own audio CDs with DAT-quality recordings.

FurthurNet (www.furthurnet.com) is a decentralized peer technology, called the first noncommercial p2p network for trading legal live music, and was created by fans for fans, with much support from members of Etree and Sugarmegs music-sharing communities. The client software enforces the sharing of only legal content by limiting search and sharing to a preselected list of bands. Music files are summarized in “sets” that make it easy to get complete collections by particular artists. Multisource download is supported, and reviews of the open source client developed in Java are favorable.
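The whitelist mechanism itself is simple to express; the following fragment is only a sketch of the general approach under assumed names and example entries, not FurthurNet’s actual code or artist list:

    # Sketch of whitelist-based sharing: only content by explicitly approved,
    # trade-friendly artists is searchable or shareable. Example entries only.
    ALLOWED_ARTISTS = {"phish", "grateful dead", "string cheese incident"}

    def may_share(artist):
        return artist.strip().lower() in ALLOWED_ARTISTS

    def filter_search_results(results):
        """Drop any result whose artist is not on the preselected list."""
        return [r for r in results if may_share(r["artist"])]

    results = [
        {"artist": "Phish", "show": "1999-12-31"},
        {"artist": "Some Major Label Act", "show": "studio album"},
    ]
    print(filter_search_results(results))   # only the whitelisted artist remains

The interesting design question is of course who maintains the list and how it is distributed to clients, which is where the central accountability mentioned for RootNode below comes back in.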

RootNode Live (RNL, www.rootnode.org) is another decentralized p2p network, descended from Gnutella. RNL is utilized for the legal, reliable trading of live concert recordings. It is open source, and rootnode.org itself is a music-magazine site made by some students from Georgia Tech. Running the client requires a registered account with the main RNL Web site, which provides a form of central accountability even in the serverless environment of the client. (Interestingly enough, Georgia Tech was a hotbed of person-to-person server technology in the form of the open Swiki, as detailed in my previous book, The Wiki Way, and the site encourages registered members to contribute to the content.)

Some concern has been voiced in the general p2p community that by moving into self-proclaimed “legal only” networks, these users are weakening the position of the free, general-content networks. This presumably means that the critics feel that by keeping a focus on a narrower segment of p2p file sharing, these users are unlikely to protest against new legislation, or the disruption and shutdown of the other networks, until too late. There is some historical precedent for that view.
