Security Issues

We start with security issues, because a clear grasp of them and how a particular peer technology is secured is a prerequisite to analyzing legal ramifications.

Peer technologies must be considered “security porous” in the sense that they inherently provide great individual freedom for the user and are designed to easily establish direct connections between peers. In short, p2p clients have a tendency (a nasty one from the viewpoint of controlled IT policy) to blithely ignore or actively circumvent traditional security measures such as corporate firewalls or filters. Information can thus cross otherwise established boundaries “without permission, assistance, or knowledge of any central authority or support groups”, as one corporate vendor puts it.

The Dimensions of Security

Security in peer technologies has a number of dimensions, and should in this discussion also be extended to include techniques for ensuring availability and preventing illicit modification of content.

We can examine these issues in terms of the same list of characteristics defined in Chapter 2, which is used later to summarize characteristics of the examined implementations. This analysis looks slightly different, depending on the particular application focus of the p2p technology, as discussed in Chapter 3. Each focus might require a tighter security implementation for some characteristics, yet at the same time, it may allow very relaxed constraints on others.

Communication and Security

In communication contexts, most commonly messaging (see Chapter 6), the primary focus is on the personal identity of the user. Table 4.1 summarizes some of the specific security issues raised, along with possible solutions.

When selecting an IM technology, it seems reasonable to examine it for the kind of identity security it can provide, yet this issue often appears to be neglected even in enterprise IM environments. With few or no safeguards, the risk is great that identity misrepresentation can occur with potentially grave consequences.

Many established IM systems paradoxically don’t really care who you are, only that you provide a unique identifier or “handle” to which other users can consistently refer. Your real “identity” is then implied in the social context of the people you have conversations with—a more or less trusted identity depending on how well they know you and can determine that you are who you claim to be.

Table 4.1. Summary of security issues in p2p communication contexts
Issue: Identity
Problem: Authentication, as many p2p clients don’t care or just assume you are who you say you are.
Possible solutions: External mechanisms (machine, client, intranet, service login); selection of a client that cares; identity token, digital signature.

Issue: Presence
Problem: Not always valid for user; indicates online client. Personal integrity issues.
Possible solutions: Better user selection, automatic inactivity detection by client; user-selectable invisibility, etc.

Issue: Roster
Problem: Integrity issues.
Possible solutions: Ability to ensure roster privacy, exclusion from others’ lists.

Issue: Agency
Problem: Proper delegation of authority.
Possible solutions: Stronger authentication measures.

Issue: Browsing
Problem: Exposure of local data, risk of intrusion.
Possible solutions: User consent, authentication, and filters required.

Issue: Architecture
Problem: Vulnerabilities, unintentional exposure of local systems.
Possible solutions: Open source, rigorous testing, external safeguards such as firewalls, clear usage policies.

Issue: Protocol
Problem: Vulnerabilities, lack of interoperability.
Possible solutions: Open source, clear intent for interoperable platforms.

In some cases, the identity handle is one or more permanently server-registered identities or tokens tied to a particular client or client configuration. In others, it’s an informal, per-session or per-peergroup-defined uniqueness, adequate for the closed group but of little value elsewhere.

Bit 4.1 Popular IM systems “track personal identity” but don’t actually care about a user’s real identity or require authenticated proof of it.

Users concerned with such matters must use external means and tools to accredit themselves with recipients, such as trusted public keys and digital signatures.


Casual anonymity is then easy in IM, if perhaps a dangerous illusion in the face of someone determined to track you down. This last issue is explored in detail in the context of distributed technologies that attempt to provide secure anonymity for their users, such as Freenet, discussed in Chapter 9.

As agency in various forms becomes more widespread, autonomous agents deployed by a user client will more often be found acting on behalf of the user. The need for proper authentication and delegation of authority thus drives a demand for more secure identification. E-commerce imposes the same requirement.

Bit 4.2 Both agency and e-commerce require secure authentication.

To be resolved is whether the authentication mechanism will be tightly centralized (like .NET Passport) or based on distributed trust systems (such as those proposed by JXTA).


Authentication of identity in p2p contexts can be any one of the following, in increasing degrees of formal security:

  1. Nonexistent (the system accepts whatever you say at face value).

  2. Implicit (you can run an identifiable instance of the client software).

  3. Based on local machine, client, or LAN log-in (local password access).

  4. Login to a p2p server/service (p2p network authenticated access).

  5. External secure authentication mediated by the p2p service (a central authentication service with encrypted signature, such as Passport).
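
As a rough code illustration of this scale (purely hypothetical names, not any particular product’s API), a deployment policy could be expressed as a minimum required level that each peer must meet:

    from enum import IntEnum

    class AuthLevel(IntEnum):
        """Hypothetical ordering of the five degrees listed above."""
        NONEXISTENT = 1    # whatever identity is claimed is accepted
        IMPLICIT = 2       # identity implied by a recognizable client instance
        LOCAL_LOGIN = 3    # machine, client, or LAN password access
        SERVICE_LOGIN = 4  # authenticated login to a p2p server/service
        EXTERNAL = 5       # external secure authentication (central service
                           # with encrypted signature)

    def admit_peer(peer_level: AuthLevel, required: AuthLevel) -> bool:
        """Admit a peer only if it meets the deployment's minimum level."""
        return peer_level >= required

    # A corporate deployment might, for example, insist on at least level 4:
    print(admit_peer(AuthLevel.IMPLICIT, AuthLevel.SERVICE_LOGIN))   # False
    print(admit_peer(AuthLevel.EXTERNAL, AuthLevel.SERVICE_LOGIN))   # True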

While still rare, network or external “public” authentication based on public key digital signatures is being deployed in newer solutions with increasing frequency. The solution that might tip the balance toward requiring such strong authentication everywhere, even in casual IM, is Windows Messenger (see Chapter 2), with its reliance on centralized .NET Passport server authentication.
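
As a minimal sketch of what such public key authentication involves at the user level (using the third-party Python "cryptography" package as an assumed dependency; key distribution and any Passport-style infrastructure are left out), a sender signs a message and a recipient verifies it against the sender’s public key:

    # Sketch only: sign a message and verify it, of the kind a user could layer
    # on top of an IM system that does no authentication itself.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # kept secret by the sender
    public_key = private_key.public_key()        # shared with recipients

    message = b"It's really me; meet at 10:00"
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)    # raises if message was altered
        print("Signature valid: sender holds the private key")
    except InvalidSignature:
        print("Signature check failed: message or identity not trustworthy")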

Bit 4.3 Full-featured IM clients expose far more than the user assumes.

It’s important to be familiar with all the configuration options of an IM client, and with what the extra features might mean for the different security and integrity issues.


A second major problem with casual deployment of IM clients is the degree of “exposure” they entail. This exposure can occur on several levels: as a threat to personal integrity, data integrity, or corporate integrity. Through the client, a particular IM system might expose the individual’s personal details, behavior, and private roster.

Some implementations or retrofits to automated presence can be used to monitor employee work and Internet habits. The client might gratuitously expose sensitive data from the client system or details of the client-side LAN to any external party that cares to look. The client software and its features might offer an unsuspected route through an otherwise adequate firewall for malicious intrusion.

The measures that can be taken to limit such client-mediated exposure depend on external policies and careful selection, and above all on sensible configuration of client software. In a general sense, while proprietary solutions might seem adequately safe, they are seldom described in sufficient detail for an informed risk evaluation. They can often include undesirable spyware components for advertising, and at the very least, they introduce unwanted dependencies on third-party servers.

Sharing and Security

In the sharing context, the focus is less on who you are; the primary interest is the (usually static) content that can be retrieved through the network. This doesn’t mean that all p2p sharing networks consider identity issues such as authentication irrelevant, only that the matter is dictated more by external factors and the kind of network.

Bit 4.4 Common p2p file sharing systems tend to ignore user identity.

This is not necessarily a disadvantage or failing but depends entirely on context. In some cases, you might even wish for deliberate and secure user anonymity.


Table 4.2 summarizes some of the issues for content sharing, which differ if only because the sharing context is rarely concerned with personal presence. Authenticated data identity, discussed in Chapter 3, is a more serious matter; it can safeguard against downloading malicious code disguised as content.

Individual identity, or authentication of individuals, becomes important in contexts where the sharing community is closed in some way, or when sharing requires explicit person-to-person consent. Otherwise, the focus is on the client/host role and possibly on implementing a trust system based on node identity to deal with unreliable, disruptive, or malicious nodes.

As a rule, in sharing, one assumes that the client software is in the active role, ideally an autonomous participant in the network no matter what the user is doing. Content access is then an anonymous and automatic process, where the client fulfills requests from all comers silently in the background.

Table 4.2. Summary of security issues in p2p sharing contexts
Issue: Identity (user or data)
Problem: Authority to share, but many clients don’t care. Malicious nodes.
Possible solutions: External mechanisms (as with IM), digital certificates; trust systems for node identity.

Issue: Presence
Problem: Online node status only.
Possible solutions: Not really relevant unless retrieval consent is required.

Issue: Roster
Problem: Associating specific content with nodes.
Possible solutions: Specific sharing policies.

Issue: Agency
Problem: Little or no control of how content is shared or resources used.
Possible solutions: Closed p2p networks, node IP block filtering, clear sharing policies, consent dialog.

Issue: Browsing
Problem: Open, autoscan for files.
Possible solutions: As before, a configuration issue.

Issue: Architecture
Problem: Vulnerabilities, unintentional exposure of local systems.
Possible solutions: Open source, rigorous testing, external safeguards such as firewalls, clear usage policies.

Issue: Protocol
Problem: Vulnerabilities, lack of interoperability.
Possible solutions: Open source, clear intent for interoperable platforms.

This process is natural enough if the focus is solely on stored content, without any special concerns for who can access it. The casual approach to identity can cause problems if the context for the deployed p2p network requires user authentication, although this vetting can usually be left to other general security mechanisms already in place, such as machine or intranet login.

It’s common corporate practice that anyone trusted enough to have normal access to the physical local network is by default trusted to access its resources and content, and thus presumably trusted enough to access content stored in a p2p virtual network deployed on it. The situation becomes more complex when “outside” access is allowed, for instance from remote machines across the Internet. Were the p2p layer just accessed through the existing LAN, the usual authentication mechanisms that allow the remote user to access the physical intranet would be adequate to authenticate access to the peer network as well.

On the other hand, the p2p technologies discussed here deploy their own virtual connectivity layer, an independent network. Outside clients can directly contact inside ones, blithely ignoring the intranet authentication mechanisms, and typically crossing the firewall barrier with impunity. This last issue deserves special attention.

Firewalls and Tunnels

When a corporate intranet connects to the Internet—and this applies to many other local and even home LANs, for that matter—it’s almost a certainty these days that a firewall shields the internal network from intrusions. This is in addition to the usual network address translation that multiplexes a single external Internet IP address to many LAN identities.

Most p2p clients and protocols have evolved ways to work even behind firewalls, so the existence of a secure firewall does not by itself prohibit the virtual p2p network from spanning across this barrier. It’s usually easy to set up a client inside the secure zone that will happily join a p2p network on the general Internet. The user often won’t even realize that his deployment of the client provides a tunnel into the secure zone and constitutes a serious transgression of firewall policy.
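
The mechanics are usually mundane: the inside client initiates an ordinary outbound connection, often on a port the firewall already allows for web traffic, to a peer or relay on the outside, and subsequent traffic in both directions rides that established channel. The sketch below is deliberately simplified; the relay host, port, and handshake are invented placeholders rather than any real client’s protocol.

    import socket

    RELAY_HOST = "relay.example.net"   # hypothetical outside rendezvous/relay
    RELAY_PORT = 443                   # a port most firewalls allow outbound

    def open_tunnel() -> socket.socket:
        """Open an outbound connection that outside peers can then reach us on."""
        sock = socket.create_connection((RELAY_HOST, RELAY_PORT), timeout=10)
        sock.sendall(b"HELLO inside-peer\n")   # placeholder handshake
        # From the firewall's point of view this is just another permitted
        # outgoing connection; from the p2p network's point of view it is a
        # two-way tunnel into the protected zone.
        return sock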

Bit 4.5 P2P deployment can easily mean a breach of firewall security.

For example, most instant messaging technologies work around firewalls to provide virtual direct access to the client system, these days even allowing unregulated background file transfers that would otherwise not be possible.


In the absence of specific p2p policy and means of enforcement, sensitive content considered secure behind a firewall can unwittingly become easily accessible to external, unauthorized individuals who are on the common p2p network. In many cases, the security breach is bidirectional through this tunnel effect.

Bit 4.6 “Clever” clients can unwittingly subvert proxy/firewall protection.

Some software is coded with hidden “optimizing” features, some of which can compromise explicit security settings.


Illustrative of proxy subversion is the ubiquitous MS IE (v5+) Web client, not p2p to be sure. It has at least one little-known quirk concerning its (rather convoluted) settings for proxy use. IE will blatantly ignore these settings if it can determine that a direct (as it sees it, “optimal”) connection is possible. Thus, it’s possible for a home-firewall user to be firewall/filter protected only on the first outgoing connection attempt of a session—IE bypasses the proxy on all subsequent connections.

Convenience Lapses

One convenience feature to be careful of is the automatic scan of the hard disk for files to share, along with the setting to recursively scan folders. It’s easy to share too much. Optional scan (default off), clear settings and indications, and the ability to specify “safe” file types and exclude categories are some of the desired control options.
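
A sketch of these safer defaults (illustrative names only, not any real client) might make sharing scans explicit, non-recursive unless requested, and limited to an allowlist of file types:

    from pathlib import Path
    from typing import List

    SAFE_SUFFIXES = {".mp3", ".ogg", ".txt", ".pdf"}   # example allowlist

    def scan_shares(folder: str, recursive: bool = False) -> List[Path]:
        """Return files eligible for sharing; recursion is off by default."""
        root = Path(folder)
        candidates = root.rglob("*") if recursive else root.glob("*")
        return [p for p in candidates
                if p.is_file() and p.suffix.lower() in SAFE_SUFFIXES]

    # The user, not the client, decides what is offered to the network, e.g.:
    # shared = scan_shares("/home/alice/public", recursive=False)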

A feature in some client implementations that’s a potential security risk is the automatic client upgrade, where the software can detect when newer versions or new components are available on the network (or from an internally specified server) and automatically initiate a user-transparent fetch and upgrade process. An unattended p2p client can thus continually “evolve” and automatically restart itself as the latest release version without any user interaction. This is not exclusive to p2p because many other types of Windows software have similar auto-upgrade functionality.

This feature is undeniably a valuable convenience when client software is rapidly evolving, ensuring that everyone quickly gets the latest versions without reinstalling. However, the process also restarts the client so that the new version can be executed in its place, which disrupts connectivity. In many cases, it means an entirely new network topology is seen by the peer application after it rejoins the network. A related risk, especially on an untended system, is when an upgrade turns out to be unstable. It’s little consolation to the user of a crashed unattended system that, on the occasions when this happens, an emergency upgrade is quickly released.

You should note that the setting for detect-and-upgrade is typically enabled by default. A related setting, sometimes invisible, is how often the client checks for new versions. At best, both can be configured by the user, though the settings are often hard to find.

More seriously, the upgrade process could be subverted so that a client is tricked into fetching and executing a rogue program instead of the usual upgrader. The rogue program can then carry out all manner of malicious activities from the machine it’s running on, having bypassed virus and permission checks because it runs as a child process of the client.
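
A basic countermeasure, assuming the vendor publishes a digest (or better, a signature) for each release through a separate trusted channel, is to verify the downloaded file before it is ever executed. The sketch below uses placeholder names and a placeholder digest value:

    import hashlib

    # Published by the vendor out of band; placeholder value only.
    EXPECTED_SHA256 = "0" * 64

    def update_is_authentic(path: str) -> bool:
        """Return True only if the update file matches the published digest."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest() == EXPECTED_SHA256

    # Only if update_is_authentic("client-upgrade.bin") returns True should the
    # client install and restart; a digital signature check is stronger still.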

This kind of piecemeal, application-internalized updating, with its associated risks, becomes unnecessary when the update mechanism is integrated with the operating system, as in FreeBSD Unix. Similar schemes are being developed for Linux. With Debian GNU/Linux, for example, the user can upgrade all installed software to the latest versions by entering a single command (“apt-get upgrade”), or optionally upgrade just specified applications or groups. The packages are fetched from trusted FTP servers all over the world, according to the stability criteria set by the user: stable (well-tested), testing, or unstable (bleeding-edge). Nothing like this is available for Windows (yet), despite Windows Update and third-party auto-upgraders.

Subversive P2P

Some p2p solutions are intentionally designed with more than a little bit of “subversion” in mind. Although such a design might seem attractive for other reasons (for example, encrypted content storage or protection against content manipulation), it’s an aspect of the solution that carries some risks.

Above all, adoption of such a technology in the wrong setting, such as in enterprise, can prove both ill-advised and embarrassing if management demands central control of content and clear audit trails.

Some p2p developers believe that once information is digitized, it is free for everyone and must never be censored, and they make their p2p designs profoundly reflect that view. Such a solution can make it impossible to apply any serious content management or to keep the deployment compliant with overall corporate network policy.

Other technologies go to great lengths to ensure anonymity of both ends of a node conversation. But anonymity might be inappropriate for contexts where clear sender-receiver trails are necessary for external reasons. Anonymity is discussed later in this chapter, while implementation examples are found in Chapter 9.

Redundancy and Persistence

We tend to assume that stored content remains available forever, despite experiences to the contrary when personal PCs crash or corporate servers go down. Backups can restore lost content—usually—and more often in the latter case. But that’s with centralized content management (by user or IT department).

What of p2p content? That depends on the technology and network. In simple atomistic p2p, like Gnutella, random duplication of content covers most of the slack in backup through sheer redundancy—the more critical issue is finding nodes online with the desired content and the free capacity to transfer it.
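
A back-of-the-envelope calculation shows why sheer redundancy covers so much of the slack. If a file is duplicated on N independent nodes, each online with probability p, the chance of finding at least one reachable copy is 1 - (1 - p)^N; the figures below are purely illustrative.

    def availability(p_online: float, copies: int) -> float:
        """Probability that at least one of the copies is currently reachable."""
        return 1 - (1 - p_online) ** copies

    # With each hosting node online only 20% of the time:
    for n in (1, 5, 10, 20):
        print(n, round(availability(0.2, n), 3))
    # 1 -> 0.2,  5 -> 0.672,  10 -> 0.893,  20 -> 0.988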

Distributed storage generally has built-in replication and error-correction functionality, which also compensates for a lack of formal backups. Some storage solutions provide automatic replication based on demand, which not only improves performance, but also ensures greater redundant availability.

Explicit content management is usually not an option, however. Systems like Freenet even automate purging by dropping content that isn’t requested often enough. This kind of management is totally agnostic of content, based solely on demand, and clearly not suitable for general-purpose archival use.
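
To make the demand-driven behavior concrete, the toy sketch below keeps a bounded local store and, when full, drops whichever item has gone longest without being requested. This illustrates the principle only; it is not Freenet’s actual caching algorithm.

    from collections import OrderedDict

    class DemandStore:
        """Bounded content store that evicts the least recently requested item."""

        def __init__(self, capacity: int = 100):
            self.capacity = capacity
            self.items = OrderedDict()   # key -> content, oldest demand first

        def request(self, key):
            """Serve a key if present, marking it as recently in demand."""
            if key in self.items:
                self.items.move_to_end(key)   # refresh its position
                return self.items[key]
            return None                       # not held locally (or purged)

        def insert(self, key, content):
            """Store content, purging the least-demanded item when over capacity."""
            self.items[key] = content
            self.items.move_to_end(key)
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)   # drop the coldest item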

Other systems might prioritize total persistence but show unacceptable retrieval response for everyday use. An analogue would be to compare online storage (on hard disk) with offline storage (such as on tape or CD/DVD). The former is comparatively fast but somewhat risky, and often-used data is given priority in the constrained capacity. The latter is slower and must be manually mounted for access, but storage capacity is essentially unlimited. In addition, offline-stored information is far more secure in terms of persistence. Offline storage media are therefore suitable for permanent archival purposes. (We ignore for the moment that no storage medium is guaranteed persistent. Even CD data can be eaten to oblivion by bacteria.)

As with every other technology, p2p is fraught with compromise and trade-off. It’s well worth the extra effort to explore these issues before committing large amounts of content to a particular technology.

Bit 4.7 When considering p2p, think of leveraging existing resources and storage more than migrating to an entirely new infrastructure.

P2P is more about the process of communicating and moving data, notwithstanding that it offers interesting innovations for distributed, adaptive storage.


Perhaps the most attractive solution in most contexts is one that can leverage existing storage, giving an edge to some of the simpler implementations that focus only on making the content accessible. Building on existing infrastructure allows great flexibility in deployment without necessitating major restructuring.
