Enterprise Goes Peer

It seems likely that much enterprise functionality will migrate to p2p applications over time, as the technologies mature and are offered by multiple sources. While client-server infrastructure will hardly disappear, at least some “mission critical” applications are destined to become peer distributed.

Analysts suggest that some server-based functionality should be decentralized sooner rather than later, and it’s important for enterprise to figure out which functions those are in any given environment and what kind of p2p implementation to use. Candidates to inventory and scrutinize for such migration include critical tasks in networking, communication, distribution, content sharing, collaborative work, community building, and information management.

Ideally, we want to see some practical user stories from the field. Trying to get an overview of peer technologies used in enterprise proved surprisingly difficult, however, and only partly because of the imprecise way “p2p” has been used as a descriptive term by various technology vendors and their customers.

In general, enterprise concerns about interoperability, security, performance, management, and privacy have seriously hampered the rapid adoption and implementation of new technologies. This is especially true for peer technologies that inherently deemphasize or even eliminate central control.

The easiest “peer” technology to document clearly from case studies turned out to be a category I had largely excluded from this book, because practical implementations usually show little or no communication between peers. This is distributed processing (DP) outsourced to idle PC capacity in a network. DP resource sharing is most often managed entirely from central servers, and the distributed clients communicate exclusively with those servers, never with their PC peers.

Other groups that have been documented in peer contexts turn out to be based on solutions for distributed resources, application logic, and storage. Again, in the enterprise context, control and management turn out to be centralized, with little or no control at the peer level. This makes such cases relatively uninteresting from the point of view of this book. At best, we see p2p communication between distributed central servers in an otherwise markedly client-server architecture.

That said, significant activity and experimental deployments have brought widespread adoption of various p2p solutions closer, affecting both corporate and individual users. In particular, work on standards and infrastructure, along with the heavy commitment of Microsoft to .NET, Sun to JXTA, and others, practically ensures that some forms of p2p will become as ubiquitous as e-mail, messaging, and the Web are today.

This section therefore starts by examining a few of the enterprise-oriented efforts and moves on to some vendor solutions for enterprise.

Intel and P2PWG

Through the Peer-to-Peer Working Group (P2PWG, www.peer-to-peerwg.org), Intel is attempting to help set the standards for p2p computing applications and platforms. The P2PWG is a consortium for the advancement of best practices in p2p computing that includes many major PC vendors. Note that P2PWG explicitly defines p2p to encompass desktop systems (from the Web site):

Put simply, peer-to-peer computing is the sharing of computer resources and services by direct exchange between systems. These resources and services include the exchange of information, processing cycles, cache storage, and disk storage for files. Peer-to-peer computing takes advantage of existing desktop computing power and networking connectivity, allowing economical clients to leverage their collective power to benefit the entire enterprise.

Intel has been pushing fairly hard for broad adoption of peer technologies for some time. This effort may be seen as a result of the success of the company’s distributed processing effort, NetBatch, used in-house for over a decade. Not only did this technology increase Intel’s average aggregate processor utilization from 35 to 85 percent on its internal network, it saved the company hundreds of millions of dollars by allowing faster chip validation tests.

Intel is therefore advocating the adoption of standards so that the future Internet, seen as a mostly PC-based architecture, can become pervasive very quickly. A side effect would be to accelerate the adoption of more powerful (and newer) clients and next-generation devices. This in turn would naturally promote demand for Intel chips and for some of the distributed processing solutions the company has developed.

The Peer-to-Peer Trusted Library (PtPTL), launched in February 2001, is one Intel-sponsored effort to provide the secure infrastructure components that business requires to confidently deploy true p2p solutions. PtPTL is a free, open source library, currently available for download at www.sourceforge.net/projects/ptptl. The stated goal of the project is to spur open innovation in p2p security.

The library allows software developers to add the element of trust to peer applications by providing support for digital certificates, peer authentication, secure storage, public key infrastructure, digital signatures, and symmetric key encryption. It also provides simple support for networking and some operating system primitives, such as threads and locks, to ease the development of applications that are portable to both Win32 and Linux.
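
To make this concrete, here is a minimal sketch of the sign-and-verify primitive that such a library packages for peer authentication. It uses the standard Java security API purely for illustration rather than PtPTL’s own interface (PtPTL is a native library for Win32 and Linux), and the class and message names below are invented for the example.

    import java.security.*;

    // Hypothetical demonstration class; not part of PtPTL itself.
    public class PeerTrustSketch {
        public static void main(String[] args) throws Exception {
            // Each peer holds a key pair; the public key is what a
            // digital certificate would bind to the peer's identity.
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair keys = gen.generateKeyPair();

            byte[] message = "shared file manifest".getBytes("UTF-8");

            // The sending peer signs the message with its private key.
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(keys.getPrivate());
            signer.update(message);
            byte[] signature = signer.sign();

            // The receiving peer verifies against the sender's public
            // key, authenticating the sender without a central server.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(keys.getPublic());
            verifier.update(message);
            System.out.println("Peer signature valid: "
                    + verifier.verify(signature));
        }
    }

In a real deployment, the public key would travel inside a digital certificate, which is where the library’s certificate and PKI support comes in.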

So what business areas does P2PWG see adopting peer technology? Most of them. The organization doesn’t really specify but instead presents the key activity areas, or scenarios common to many businesses, where p2p will make a difference for enterprise.

  • File sharing, which is glossed over a bit because P2PWG emphasizes that p2p is much more than just this popularized functionality. Nevertheless, it remains a major application in many distribution contexts by reducing network traffic, off-loading servers, and using aggregate storage.

  • Collaboration, where individuals and teams are empowered to create and administer real-time and off-line collaboration areas in a variety of ways and locations. Collaboration increases productivity and enables teams in different geographic areas to work together, while the requirements on servers and network are lower than for corresponding centralized solutions.

  • Edge services, which are billed as “Akamai for the enterprise”—in other words, p2p as a way to deliver services and capabilities more efficiently across geographical boundaries by local caching and adaptive distribution.

  • Distributed computing and resources, where the focus is on providing adaptive, large-scale computer processing and storage, and on sharing the results among peer systems.

  • Autonomous agents, which enable computing networks to dynamically work together. “Intelligent” agents reside on peer computers, exchange information, and initiate tasks on behalf of other peer systems.

All of these areas, coupled with inexpensive computing power, bandwidth, and storage, lit a fire under the p2p movement, as the P2PWG puts it. Curiously, no explicit mention of instant messaging was found, whether due to an oversight, to taking IM for granted, or to simply not seeing it as legitimate enterprise p2p.

Towards the close of 2001, the focus came more and more to be on what’s termed “Web Services” (as in .NET), partly a reflection of the growing importance of p2p services deployed in a Web-XML context, partly a reaction to the general hype associated with the generic p2p term. Perhaps, too, it’s a way of distancing the core of peer technologies from the ever more contentious file-swapping debate. A significant indicator is that the autumn O’Reilly P2P conference was named “Peer-to-Peer and Web Services”, while the corresponding spring 2002 event was renamed the “O’Reilly Emerging Technology Conference”. The latter’s subtitle, “Building the Internet Operating System”, is even more indicative of the focus shift, and Tim O’Reilly notes in the Web information (conferences.oreilly.com/etcon/):

Peer-to-peer and Web Services are only the first steps towards the emergence of a distributed Internet operating system—a new platform for next generation applications that are device and location independent, and provide increasingly transparent services.

While 2000 and 2001 saw the term “p2p” used everywhere, a more mature and business-oriented attitude now seems to prefer speaking of peers and distributed services. Even the individual areas (such as content publishing and sharing) have become more aware of the trust, responsibility, and legal concerns that are so vital to corporate deployment, so we see a convergence on more secure peer systems from all sides.

Finance and Trading

Finance and trading are business areas that especially require secure trust mechanisms. Mark Hunt, director of XML strategy at London-based information services company Reuters, said this in February 2001 (reported in an article at www.itworld.com/AppDev/4088/IW010212hnp2p/) about the future of peer technologies:

P-to-P is very key, mainly for building community and enabling a flow of good content. In the financial industry context, trading communities generate a lot of ideas. We can use p-to-p and instant messaging to connect that community together to form a feedback loop.

This evaluation, that p2p would be of most interest in financial and trading contexts, suggests that companies in these fields, which already depend on rapid and direct exchange of information in other media, would be at the forefront of p2p adoption.

While traditional information search paradigms have people seek out information when they need it (for example, on the Web or in databases), networks based on peer technologies can continuously track information relevant to a particular individual, in close to real time. Thus, asynchronous search is transformed into notification-driven information updates, a major attraction for businesses that work in rapidly changing information environments and a reason they would be willing to adopt new technologies to this end.
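
As a rough sketch of this pattern (all class and topic names here are invented for illustration), a peer registers a standing interest once, and every later update is pushed to it rather than fetched by repeated searching:

    import java.util.*;
    import java.util.concurrent.*;

    // Invented names; a toy model of notification-driven updates.
    interface Subscriber {
        void notifyUpdate(String topic, String content);
    }

    class PeerNotificationHub {
        private final Map<String, List<Subscriber>> interests =
                new ConcurrentHashMap<>();

        // A peer declares a standing interest in a topic instead of
        // repeatedly searching for new material on it.
        public void subscribe(String topic, Subscriber peer) {
            interests.computeIfAbsent(topic,
                    t -> new CopyOnWriteArrayList<>()).add(peer);
        }

        // New information is pushed to every interested peer as soon
        // as it appears, in close to real time.
        public void publish(String topic, String content) {
            for (Subscriber peer : interests.getOrDefault(
                    topic, Collections.emptyList())) {
                peer.notifyUpdate(topic, content);
            }
        }
    }

A trading desk, say, would subscribe once to a “bond issues” topic and thereafter receive each new item the moment another peer publishes it.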

All these p2p advantages are clear in principle, but practical deployment so far seems to have been very experimental. A keynote of 2001 was the large effort to create standards, build infrastructure, and otherwise prepare the way for practical client networks in business. The next year or two should therefore prove a watershed in the way some of these interested companies do business.

The Case of the Missing Material

It’s been curiously hard to collect practical case material on explicitly p2p solutions from either business users or vendors, despite some early, positive responses to requests for contributions. This reluctance may stem from several causes.

  • P2P is seen as an outsider and rogue technology. In some corporate environments, it may have been deployed “under the radar” of the IT department. Those in the know can then be reluctant to attract the spotlight of attention by contributing material.

  • The technology is being used in or around some more sensitive situations (such as military or mission critical). Inquiries coming from an author in a foreign country therefore are viewed with great caution, especially in the current anti-terrorist atmosphere.

  • The technology or its application is viewed as cutting edge. Thus there is a natural reluctance to disclose detailed information that could end up in the hands of competitors.

  • Some vendors may have fallen victim to their own hype or to the general market recession in 2001. Several companies that seemed promising early in the year were later no longer showing any activity, sometimes having dropped off the Web entirely.

  • People perhaps have just been too busy or preoccupied to follow up on earlier responses or even to respond at all. Time after time, I followed published or forwarded contact links, only to run into consistently broken e-mail addresses, overfull mailboxes, or other discouraging signs that these people simply don’t care about contact. Hmm....

This list of excuses aside, enough general material could be gathered, or inferred, to make a case for shifting the focus to what’s usefully known about the usage patterns associated with the different peer technologies. First, though, a few brief mentions of real p2p enterprise products and deployments.

Brief Mentions

In this section, a few peer-related enterprise solutions are noted. Small-scale solutions seem rare, perhaps because of the many free p2p implementations. The selection is somewhat arbitrary and is intended only to give an indication of where small business and enterprise p2p solutions are going. The examples are neither tested nor endorsed.

Distributed Storage and Content Delivery

Akamai (www.akamai.com) early on made a place for itself in the market by deploying large-scale solutions for aggressively distributed and adaptive storage and, by extension, outsourced e-business infrastructure services and software. The solution is based on central-server technology, but in the form of peer clusters distributed geographically. The technology is fairly well documented, and it’s used by Intel as a case study of successful use of Internet technology (and of Intel hardware).

The guiding concept of Akamai Network Deployment sounds familiar from the p2p perspective: “The only way for Internet content services to work is to put content at the ‘edge’ of the public network, close to end users”. In this instance, the company’s perspective is global, with solutions to distribute storage and retrieval for multinationals and international service providers. The strategy is to put thousands of small, powerful, relatively inexpensive servers at thousands of locations around the world. Akamai reportedly deploys several hundred new servers every month, and demand shows no sign of slackening. The rack-mounted servers provide a flexible clustering technique that can adapt to virtually any requirements, managed by Akamai’s own suite of Linux-based applications. As a counterweight to the extremely distributed architecture, the company also set up a secure network operations center (NOC) that is staffed 24 hours a day, so that network problems are found and fixed from a single location, in real time, no matter where they occur.

Another part of the strategy is dynamic off-loading of central request-serving points with transparent redirection. While the user might see a single, central Web content provider, the content is actually served from adaptive caches in any number of locations nearer to the user. High-load providers such as Yahoo!, CNN, and Amazon rely heavily on this form of storage and processing outsourcing, both to cope with load and to build redundant reliability into their core services.
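
Akamai’s actual request-mapping system is proprietary, so the following toy Java sketch only suggests the redirection idea under invented assumptions: a front end tracks measured latencies to its replica caches and answers each request with whichever replica currently looks “closest”.

    import java.util.*;

    // Toy model of transparent redirection; the names and the simple
    // latency rule are assumptions, not Akamai's mechanism.
    class EdgeRedirector {
        private final Map<String, Integer> replicaLatencyMs = new HashMap<>();

        // Record the last measured round-trip time to a replica cache.
        void report(String replica, int rttMs) {
            replicaLatencyMs.put(replica, rttMs);
        }

        // Answer a request with the currently "closest" replica, so the
        // user sees one provider while content is served from nearby.
        String resolve() {
            return replicaLatencyMs.entrySet().stream()
                    .min(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElseThrow(() -> new IllegalStateException("no replicas"));
        }
    }

    class EdgeRedirectorDemo {
        public static void main(String[] args) {
            EdgeRedirector frontEnd = new EdgeRedirector();
            frontEnd.report("cache-eu-1.example.net", 18);
            frontEnd.report("cache-us-3.example.net", 95);
            System.out.println("Serve from: " + frontEnd.resolve());
        }
    }

In practice such mapping is typically done at the DNS level, which is what keeps the redirection invisible to the user’s browser.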

To guide the future deployment of edge services, the W3C (World Wide Web Consortium) published a technical note that defines the Edge Architecture Specification (4 August 2001, www.w3.org/TR/edge-arch). Derived from Akamai and Oracle work, it extends the Web infrastructure through the use of HTTP surrogates, intermediaries that act on behalf of an origin server.

Distributed Dispatch Management

Endeavors Technology, Inc. (www.endtech.com) has pledged its support of the Peer-to-Peer Working Group and contributed Magi, a p2p collaboration infrastructure, as a proposed world standard for the flow of information between Web-enabled devices using p2p technologies.

Magi Enterprise gives peer status to PCs, laptops, and Windows CE handheld devices, allowing all to communicate and interact securely. Implemented as an end product, Magi Enterprise securely links office and project teams together for such collaborative needs as file sharing, file searching, instant messaging, and chat. The system uses HTTP, WebDAV, and other open standard protocols to create a secure, cross-platform environment for collaboration-intensive applications. A MagiSeek component can search and index files across the community for rapid search and access.
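
Because Magi builds on open standards, the protocol layer underneath it can be illustrated without reference to Magi’s own API. The sketch below sends a bare WebDAV PROPFIND request over a plain socket (the host name and path are placeholders, and this is standard WebDAV per RFC 2518, not Magi’s wire format); a peer exposing a shared folder over WebDAV answers with an XML listing of the folder’s resources and properties. A raw socket is used because Java’s HttpURLConnection does not accept WebDAV extension methods.

    import java.io.*;
    import java.net.*;

    // Illustration of plain WebDAV; host and path are placeholders.
    public class WebDavListing {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("dav.example.com", 80)) {
                Writer out = new OutputStreamWriter(
                        socket.getOutputStream(), "UTF-8");
                // PROPFIND with Depth: 1 asks for the properties of the
                // collection and of its immediate members.
                out.write("PROPFIND /shared/ HTTP/1.1\r\n"
                        + "Host: dav.example.com\r\n"
                        + "Depth: 1\r\n"
                        + "Content-Length: 0\r\n"
                        + "Connection: close\r\n"
                        + "\r\n");
                out.flush();

                // The server replies with a 207 Multi-Status response
                // carrying an XML listing of resources and properties.
                BufferedReader in = new BufferedReader(new InputStreamReader(
                        socket.getInputStream(), "UTF-8"));
                for (String line; (line = in.readLine()) != null; ) {
                    System.out.println(line);
                }
            }
        }
    }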

One product built atop the Magi system, Magi Dispatch, manages the dispatch, tracking, and closure of field service repairs for service dispatch centers. Armed with no more than a WAP (Wireless Application Protocol) phone, a service technician can be alerted to a service call, obtain driving directions, place part orders, and file status reports. The Dispatch system includes a graphics-based, drag-and-drop development environment for establishing work flows and distributed processes. One might also assume that service technicians can directly update the support database, from field PDAs or some suitable desktop landing space.

Magi Enterprise for Devices is a p2p software development initiative to transform “unequal” computing devices, such as PDAs, PCs, servers, information appliances, and Internet-ready phones, into equal peers. This common infrastructure opens the door for the creation of collaboration applications for the mobile workforce, and it’s perhaps the basic design for the Dispatch system.

Magi Express is a free version of Endeavors’ Magi p2p software, a fully operational peer infrastructure program that doesn’t require any additional upgrades or equipment to function, with no trial periods or timeouts. It’s billed as an easy-to-use thin server with which users can create online collaborative communities.

All solutions are Windows only, although a Mac OS X version is promised.

Proposed is Magi Agent, advanced process-automation software that executes securely across peer-connected communities. It is intended for developing download and payment solutions for the music and video industries, or for creating business eProcesses, such as machine-to-machine commerce, auditing, and tracking in the utilities industry. Another proposed product is Magi E-Commerce, software to provide secure drag-and-drop transactions on the Web.

Home Management

Several vendors, including Endeavors, propose home management systems based on distributed peers for those people with an always-on Internet connection. A dispatcher application can then help homeowners to manage a variety of activities, including scheduling and deliveries. Security applications can also be added using agents that monitor or control motion detection and lighting.

Much could be done (and is) in do-it-yourself ways by individual homeowners, given the availability of X.10 or LAN connectivity. What deters many is the lack of standardized structured connectivity apart from the power mains (which explains why X.10 is still around). Ideally, LAN or equivalent bandwidth and signal quality should be available throughout the home. Instead, the homeowner is effectively in the analogous position of trying to retrofit an electrical infrastructure given only the main fuse box as a connection point and lots of expanders and extension cords. Wireless networking (Bluetooth or other) might become significant, but physical wire or fiber is better from several points of view, fiber not least for its inherent signal security, and cheaper if installed when the building is constructed.

Physical infrastructure aside, the client situation is better than one might expect. A great deal of Java development concerns small clients and embedded, Internet-aware devices, which are inherently intended for p2p deployment. This means many home appliances are coming with network connectivity as natural to them as a power connection. Separate client adapters can often be added to devices that lack embedded clients, and such adapters might become as common a commodity in the shops as socket expanders, dimmers, and switch blocks have been before.

Much of this technology is available now, ready to deploy for the home innovator, albeit in a somewhat haphazard way. The process is made difficult by a mix of protocols and a lack of unifying, ready-to-use control software for home PC management. Be prepared to write your own manual as you go.

Health Care Services

InterPro Global Partners (www.interproinc.com) aims to be a pioneer in building Web services with the solution it is implementing for Portarius, a services company that focuses on the health-care industry. In the spring of 1999, San Diego–based Portarius called upon InterPro to help build an e-marketplace for health-care communities.

The first analysis indicated peer technology: “Instead of looking at a single hub-and-spokes model, where services are at the hub and the doctor is pushed out to the edge of the network, our model recognizes that the health-care industry is more granular than that; it puts the local physician at the hub.” The selected Web services model gives individual doctors more control in the system.

The solution links medical professionals using devices such as PDAs and PCs to one another over a p2p network. They can share medical records, send lab and pharmaceutical orders, and perform other functions that traditionally create a long paper trail. InterPro installs proprietary server technology it calls a Services Gateway on each device, enabling p2p access to the network and allowing each device to access information served by the other devices on it.

The example is described both on the company’s Web site and in an IBM case study of Web services (www-3.ibm.com/software/solutions/webservices/casestudies/interpro.html). It also uses aspects of Endeavors’ Magi networking.

This ends the quick look at some examples of deployable enterprise solutions. We next consider usage cases for the different application categories to see how smaller-scale deployments function in practice.
