Implementation Overview

For a detailed look at practical p2p technology, let’s turn the focus to a selection of implementations, some still in development and some currently available.

These are selected not only as practical solutions for different peer-suitable situations but also as illustrations of significantly different approaches and feature sets. The headings of the chapters in Part II indicate what could be called the main aspect of each implementation type, although application areas overlap considerably. The aspects chosen are messaging, file sharing, resource and content management, secure distributed storage, and collaborative frameworks.

Table 3.1 lists the different implementations examined in detail in this book, with brief mention of the characteristics particular to each. Many more implementations exist—in fact, each time I researched something in the course of writing these chapters, mention of yet another “new p2p” technology turned up. Trying to include them all (or even just a representative mention) seemed doomed to become a Sisyphean labor of endless rewrites chasing implementations and versions. It was bad enough as it was.

The practical course was therefore to focus on a selection of distinctive peer traits in different contexts, and on good implementation examples in which to discuss them, thus providing a deeper understanding of common p2p characteristics that recur in many different implementations.

Particular Focus

Seen from an abstract perspective, a technology can approach the whole concept of peer-to-peer in various ways. The chosen approach or focus area determines the solutions used and the overall potential of the technology.

Application Software or Infrastructure

As the examples in Part II show, a p2p technology can focus on specific applications (or clients) for particular purposes or on an entire network infrastructure to support arbitrary applications.

It’s in the nature of things that the client focus, where the network arises ad hoc from the chosen client-to-client protocol, was both the earliest and often the most limited form. On the other hand, such applications have proven remarkably useful, even robust, in contexts much larger than or different from the original intent.

Table 3.1. Major p2p implementations examined in detail in this book
Name and domain; chapter; type of technology; comments:

ICQ (and AIM), icq.com (aol.com), Chapter 6. Proprietary IM protocol. Instant messaging.

Jabber, jabber.org, Chapter 6. XML generic protocol. Messaging, IRC, file transfer, and more. Server-mediated user-centric directory. Any client application.

Napster, napster.com, Chapter 7. Proprietary file sharing. Historic example.

Gnutella, gnutelliums.com, gnutellanews.com, Chapter 7. HTTP-based protocol. Usually implemented as file search and transfer. Fully atomistic, any client application. Each peer is responsible for its own shared files.

Mojo Nation (version discontinued in 2002), Chapter 8. Encrypted over TCP/IP. Distributed file sharing. Atomistic. Demand-driven micropayment services, distributed swarm storage.

Swarmcast, swarmcast.com, Chapter 8. Distributed swarm file broadcast on demand. Server-mediated swarm broadcasting of requested content.

Freenet, freenetproject.org, Chapter 9. Distributed secure document publishing and retrieval. Atomistic. Encrypted, anonymized, distributed storage.

Groove, groove.net, Chapter 10. Shared workspace collaboration. Server-mediated.

JXTA, jxta.org, Chapter 10. Distributed computing in general, protocol development. Services and applications for peer groups, collaboration.

Some have cast the evolution of the Internet in discrete phases, for example “Internet 1.0”, “Internet 2.0”, and “Internet 3.0”, numbered the way software versions are. In this view, v1.0 was the original server-to-server p2p model, while v2.0 stands for the client-server model of today’s Web. Internet 3.0 then stands for a new infrastructure based on p2p networks and ubiquitous connectivity for both devices and persons. This is the Internet that the infrastructure solutions are trying to build, by making peer connectivity not only natural and easy, but inevitable and an essential part of the functionality.

Identity, Content, or Resources

The issue of primary focus, the actual target of the implementation, raises some questions. Is the focus on individual user identity, important in messaging? Does it instead identify specific content, perhaps even making the user anonymous in favor of file identity? Or does it mainly concern the distributed network and its aggregate resources, so that individual users and clients are interchangeable?

Each of these options results in different types of p2p technologies with different feature sets, different strengths, and different weaknesses. This is true no matter whether the focus is on application or infrastructure, even though the latter leaves greater latitude for application-specific focus to vary according to context.

Security

The focus might also be on security aspects. Early p2p solutions, like the early Internet solutions, were conceived in an inherently trusting environment. Any security was essentially external: anyone trusted with access to a computer was also trusted to access its content.

The modern e-world is far less trusting, and the threats to individuals and content are far more serious and immediate. Over time, even the older solutions have often been retrofitted with various measures to combat spoofing, denial of service, and malicious hacking. These measures may be adequate for individuals in noncritical messaging, just as plain-text e-mail is still widely accepted despite its wide-open e-postcard model. In closed environments, unsecured peer communication has not been seen as a problem either, even in the enterprise, although this view is changing.

A side issue is that other network security measures, such as firewalls, can block p2p communications originating from outside the protected network. Most peer technologies manage to work around this problem, as long as at least one end of a connection can accept an incoming connection.
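The workaround can be sketched as a simple decision rule, similar in spirit to the push-request mechanism used in Gnutella-style file transfer. The function and its names below are illustrative, not taken from any particular protocol:

```python
def plan_transfer(downloader_accepts_inbound: bool,
                  uploader_accepts_inbound: bool) -> str:
    """Pick a connection strategy when a firewall may block
    inbound connections on either side (illustrative sketch)."""
    if uploader_accepts_inbound:
        # Normal case: the downloader simply connects out,
        # which most firewalls permit.
        return "direct"
    if downloader_accepts_inbound:
        # Firewalled uploader: ask it, via the overlay network,
        # to open the connection outward instead (a "push" request).
        return "push"
    # Both ends firewalled: a direct transfer fails without a relay.
    return "blocked"
```

Only when both ends are behind firewalls does the technique break down, which is why a relaying intermediary is sometimes the last resort.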

However, modern peer technologies implement far stronger security as a matter of integral design, safeguarding both identity and content, usually based on variants of hash algorithms, strong encryption, public key signatures, and node trust systems. This level of security might well seem overkill for many users. It does, however, make the solution adaptable to virtually any situation, from individual to enterprise, and it confers the ability to securely and consistently authenticate content sources—even anonymous ones.
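To illustrate one such mechanism: a content-hash key, of the kind used in Freenet-style storage, lets any node verify retrieved data against the key it requested, regardless of which peer served it. A minimal sketch in Python, using only the standard library (function names are hypothetical):

```python
import hashlib

def content_key(data: bytes) -> str:
    """Derive a key from the content itself (a content-hash key).
    Anyone holding the key can later verify the content."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, key: str) -> bool:
    """Check that data received from an arbitrary, possibly anonymous
    peer actually matches the requested content key."""
    return hashlib.sha256(data).hexdigest() == key

document = b"p2p payload"
key = content_key(document)
assert verify(document, key)          # an authentic copy passes
assert not verify(b"tampered", key)   # altered content is rejected
```

The design point is that trust attaches to the content itself rather than to the serving node, which is what makes authentication of anonymous sources possible.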

It’s no coincidence that the more advanced solutions discussed in later chapters rely on these stronger encryption methods, and that even the simpler, narrow-focus messaging and file-sharing implementations are adopting similar strategies in later versions of their software and protocols.

In e-mail and Web contexts, a common complaint is that the use of strong encryption and digital signatures is both unwieldy and awkward. Implementations vary, and it’s true that the security features are seldom convenient or transparent to the user. Nor are they necessarily interoperable between sites or contexts (secure HTTPS server connections are a notable exception). This sad state of affairs seems unlikely to persist in future p2p platforms.

Bit 3.1 Strong encryption will be a ubiquitous feature of p2p systems.

This is inevitable because of the boost it gives to system robustness and utility.


The real issue with p2p security appears to be whether the authentication services will be tightly bound to central servers, as in .NET Passport, or whether they will be implemented as distributed trust systems, as in public key exchanges or other trust-based infrastructures.

Dynamic or Persistent

Some peer technologies are predominantly designed to allow great node freedom. In fact, the point is made earlier (in Chapter 1) that transient connectivity is a significant characteristic of the current p2p paradigm.

Many of the solutions described implement various strategies to deal with the issues that dynamic connectivity gives rise to: identity addressing, event journaling, maintaining network cohesion, fault tolerance, and so on. They do so with varying degrees of success in terms of performance and convenience to the user—think of search and retrieval in Gnutella, for example, as discussed later.

On the other hand, some solutions are not especially tolerant of transient-node connectivity, because their focus is on resolving other network issues, such as persistent storage or distributed resource management. To some extent, one can then view nodes in these systems as coupled client-server applications on a single host machine. The server component is expected to remain more or less continuously connected to the network, while the client component that interfaces with the user faces much more relaxed constraints.
