Akamai and BitTorrent

Both Akamai and BitTorrent address the challenge of distributing large volumes of information across huge networks, striving to minimize bandwidth consumption and delays that users might notice. Their approaches to solving these problems, however, are very different.

Applicable Web 2.0 Patterns

This comparison discusses the following patterns:

  • Service-Oriented Architecture

  • Software as a Service

  • Participation-Collaboration

  • The Synchronized Web

You can find more information on these patterns in Chapter 7.

Alternate Solutions to Bandwidth

Akamai and BitTorrent both avoid the issue of a single host trying to supply bandwidth-intensive content to a potentially global audience. A single server slows down as it approaches its maximum capacity, and the network in the immediate vicinity of the host suffers as well, because it must carry a much higher volume of traffic.

Again, the incumbent in this case (Akamai) has significantly changed its mechanics and infrastructure since the original brainstorming session that produced the Web 1.0/Web 2.0 comparison (depicted in Figure 3-1). Accordingly, understanding the patterns and advantages of each system is a good idea for budding Web 2.0 entrepreneurs. You shouldn’t view Akamai as antiquated: it is performing tremendously well financially, far outstripping many of the Web 2.0 companies mentioned in this book. It’s been one of NASDAQ’s top-performing stocks, reporting 47% growth and revenues of $636 million in 2007. With 26,000 servers, Akamai is also a huge piece of Internet infrastructure.

Akamai’s original approach was to sell customers a distributed content-caching service. Its aim was simply to resolve bandwidth issues, and it solved that problem very well. If a customer such as CNN News decided to host a video of a newscast, the content on the CNN server would be pulled through the Akamai network. The centrally located CNN server bank would rewrite the URIs of the video and other bandwidth-intensive content into URLs of resources that were easier for the requesting client to access, often because they were hosted in physically closer locations. The client’s browser would load the HTML template, which would tell it to hit the Akamai network for the additional resources it required to complete the content-rendering process. At the time of this writing, end users see no indication that Akamai.com is being used (although streaming videos do require modified URLs).
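To make the pull-through model concrete, here is a minimal sketch (in Python) of the rewriting step: the HTML template keeps its structure, but references to heavy assets are rewritten to point at an edge hostname rather than the origin. The hostnames and URL layout below are invented for illustration and are not Akamai’s actual naming scheme.

    # Hypothetical illustration of CDN URL rewriting; the origin and
    # edge hostnames are invented, not Akamai's real conventions.
    ORIGIN_HOST = "www.cnn.com"          # assumed origin server
    EDGE_HOST = "a1234.g.akamai.net"     # assumed edge-network hostname

    def rewrite_for_edge(html: str) -> str:
        # Point heavy-asset references at the edge network so the
        # browser fetches them from a nearby server.
        return html.replace(f"http://{ORIGIN_HOST}/media/",
                            f"http://{EDGE_HOST}/media/")

    template = '<img src="http://www.cnn.com/media/newscast.jpg">'
    print(rewrite_for_edge(template))
    # <img src="http://a1234.g.akamai.net/media/newscast.jpg">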

Figure 3-6 shows Akamai’s core architecture, as it was analyzed for the comparison in Figure 3-1.

Figure 3-6. Overview of Akamai core pattern (courtesy of Akamai)

Pulling richer media (the larger files) from a system closer to the end user improves the user experience because it results in faster-loading content and streams that are more reliable and less susceptible to changes in routing or bandwidth capabilities between the source and target. Note that the Akamai EdgeComputing infrastructure is federated worldwide and users can pull files as required. Although Akamai is best known for handling HTML, graphics, and video content, it also offers accelerators for business applications such as WebSphere and SAP and has a new suite to accelerate AJAX applications.

BitTorrent is also a technology for distributing large amounts of data widely, without the original distributor incurring all the costs associated with hardware, hosting, and bandwidth resources. However, as illustrated in Figure 3-7, it uses a peer-to-peer (P2P) architecture quite different from Akamai’s. Instead of the distributor alone servicing each recipient, in BitTorrent the recipients also supply data to newer recipients, significantly reducing the cost and burden on any one source, providing redundancy against system problems, and reducing dependence on the original distributor. This embodies the concept of a “web of participation,” often touted as one of the key changes in Web 2.0.

Figure 3-7. BitTorrent’s pattern of P2P distribution

BitTorrent enables this pattern by getting its users to download and install a client application that acts as a peer node to regulate upstream and downstream caching of content. The viral-like propagation of files provides newer clients with several places from which they can retrieve files, making their download experiences smoother and faster than if they all downloaded from a single web server. Each person participates in such a way that the costs of keeping the network up and running are shared, mitigating bottlenecks in network traffic. It’s a classic architecture of participation and so qualifies for Web 2.0 status, even if BitTorrent is not strictly a “web app.”

The BitTorrent protocol is open to anyone who wants to implement it. Using the protocol, each connected peer should be able to prepare, request, and transmit files over the network. To share a file via BitTorrent, the owner of the file must first create a “torrent” file; the usual convention is to append .torrent to the filename. Every *.torrent file must specify the URL of the tracker in an “announce” element. The file also carries an “info” section giving a (suggested) name for the file, its length, and its piece metadata. BitTorrent clients use the Secure Hash Algorithm 1 (SHA-1) to fingerprint each piece of the file, letting any client detect whether the data it holds is intact and complete.
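As a rough sketch of what creating a torrent involves, the following Python builds a single-file metainfo dictionary and bencodes it. The piece size, tracker URL, and helper names are assumptions made for illustration; real clients handle more fields and edge cases.

    import hashlib

    PIECE_LENGTH = 256 * 1024  # a common piece size (an assumption here)

    def bencode(value):
        # Minimal bencoder covering the types a .torrent file needs.
        if isinstance(value, int):
            return b"i%de" % value
        if isinstance(value, str):
            value = value.encode("utf-8")
        if isinstance(value, bytes):
            return b"%d:%s" % (len(value), value)
        if isinstance(value, list):
            return b"l" + b"".join(bencode(v) for v in value) + b"e"
        if isinstance(value, dict):  # keys must be sorted, per the spec
            return (b"d" + b"".join(bencode(k) + bencode(v)
                                    for k, v in sorted(value.items())) + b"e")
        raise TypeError(f"cannot bencode {type(value)}")

    def make_torrent(path, tracker_url):
        # Build and encode a single-file .torrent metainfo dictionary.
        data = open(path, "rb").read()
        pieces = b"".join(              # SHA-1 digest of each piece
            hashlib.sha1(data[i:i + PIECE_LENGTH]).digest()
            for i in range(0, len(data), PIECE_LENGTH))
        return bencode({
            "announce": tracker_url,    # the tracker's URL
            "info": {
                "name": path,           # suggested filename
                "length": len(data),    # total size in bytes
                "piece length": PIECE_LENGTH,
                "pieces": pieces,       # concatenated piece hashes
            },
        })

    # Hypothetical usage:
    # open("newscast.mpg.torrent", "wb").write(
    #     make_torrent("newscast.mpg", "http://tracker.example.com/announce"))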

Decentralization has always been a hallmark of the Internet, appearing in many different guises that come (and sometimes go) in waves. Architecturally, this pattern is a great way to guard against points of failure or slowdowns, as it is both self-scaling and self-healing.[38] A particularly elegant trait of peer-to-peer architectures is that the more people are interested in a file, the more widely it propagates, resulting in more copies being available for download to help meet the demand.
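The self-scaling claim is easy to see with a back-of-envelope calculation. The figures below are assumptions chosen only to show the shape of the effect: a lone server’s uplink is fixed, while a swarm’s aggregate upload capacity grows with every peer that joins.

    # Back-of-envelope sketch; all numbers are assumed for illustration.
    SERVER_UPLINK_MBPS = 1_000   # a single origin server's fixed uplink
    PEER_UPLINK_MBPS = 5         # a modest per-peer home uplink

    def swarm_upload_capacity(n_peers):
        # Aggregate capacity once n_peers are re-seeding pieces.
        return SERVER_UPLINK_MBPS + n_peers * PEER_UPLINK_MBPS

    for n in (0, 100, 10_000):
        print(f"{n:>6} peers -> {swarm_upload_capacity(n):>7} Mbps")
    # A lone server is capped at 1,000 Mbps, while 10,000 participating
    # peers contribute another 50,000 Mbps: demand brings its own supply.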



[38] Cloud computing, in which developers trust their programs to run as services on others’ hardware, may seem like a return to centralization (“All those programs run on Amazon S3 and EC2....”). The story is more complicated than that, however, as cloud computing providers have the opportunity to give their customers the illusion of centralization and the easy configuration that comes with it, while supporting a decentralized infrastructure underneath.
