Chapter 4. Making Change Happen

As the opposition’s favorite philosopher put it, “The philosophers have only interpreted the world in various ways; the point, however, is to change it.”[1]

Changing the world is hard; if it were easy, the world would change rather too often. It is not enough to come up with an idea to secure the Internet. The idea is the easy part. For an idea to count, you must persuade others to act.

As the Internet has grown, so has the number of people who must be persuaded to act before anything can change. Putting even the simplest new idea into practice has become a major undertaking. In the early days of the Web, a good idea could be realized in a few days. Today it takes a tremendous effort to achieve even small changes within a few years.

That Dizzy Dot.Com Growth

Most engineers prefer engineering to marketing or politics, and persuading them that either is important is a hard task at the best of times. In the Internet world, the task is much harder because many of us experienced a period when it appeared that the Internet and the Web simply took off of their own accord.

For people who started using it in 1995, the growth of the Web was always beyond the control of any individual or organization. A critical mass of users had been reached and a network effect had taken hold: Each additional user increased the value of being on the Web, and every satisfied user became a new salesperson for the Web.

Let’s take another look at the growth of the Web. The growth of the Web as a proportion of Internet users is believed to have followed the familiar S curve shown in Figure 4-1. At the start of 1993, the number of Web users was negligible; by the middle of 1994, practically every user of the Internet had used the Web.

Figure 4-1. Estimated growth of the Web as a proportion of Internet users

In practice, the growth was rather too fast to measure reliably—the only data points known with certainty are the beginning and the end. At one point during the early growth of the Web, a survey of Internet use came out showing the Web growing at a rate of 150 percent each month. Much was made of this explosive rate of growth, which, if sustained, would mean that every sentient being in our galaxy would by now be a user. When growth is so fast, the only questions that are really interesting are where the limit will be reached and how long it will take to reach it. This did not stop academics from producing papers quibbling over whether the techniques used by this survey or that might overstate the number of Internet users by 20 percent or so. The growth of the Internet within the industrial world followed the same S-shaped curve, and the same shape is currently being repeated on a global scale.
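To get a feel for how quickly a rate like that compounds, it helps to run the numbers. The sketch below (in Python; the starting figure of one million users is an assumption chosen purely for illustration) multiplies the user count by 2.5 (that is, 150 percent growth) every month.

    # Illustrative arithmetic only: 150 percent monthly growth means the count
    # is multiplied by 2.5 each month. The starting figure is an assumption.
    users = 1_000_000
    for month in range(1, 25):
        users *= 2.5
        print(f"month {month:2d}: {users:,.0f}")
    # By month 12 the total is roughly 60 billion, several times the human
    # population; by month 24 it has passed three quadrillion.

Sustained exponential growth of this kind always runs into a ceiling long before the arithmetic does.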

I divide the S-growth curve into three parts: startup, growth, and maturity. The part that attracts media attention is the growth phase. This is the point at which breathless stories can be written about the changes being wrought, and so on. The questions for business are the size of the market at maturity and when maturity will be reached. But for the designer trying to change the world, the really important part of the curve is the startup phase, before critical mass has been achieved, before growth has become self-sustaining.

It has become fashionable of late for commercial ventures to promote the network effect that comes from using their product. As Bob Metcalfe, the inventor of Ethernet, observed, the value of a computer network grows with the square of its size. A network with only one fax machine is useless; it has nothing to talk to. A network with two fax machines has a limited value. A network with a hundred machines is considerably more useful, and when almost every business has one, the fax machine becomes a vital tool.
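A rough way to put a number on the effect is to count the possible pairwise connections, which grow roughly with the square of the number of machines. The sketch below (Python, reusing the fax machine counts from the example above) is an illustration, not a formal statement of Metcalfe's law.

    # Count the distinct pairs of machines that could exchange faxes.
    def pairwise_connections(n):
        return n * (n - 1) // 2

    for machines in (1, 2, 100):
        print(machines, "fax machines:", pairwise_connections(machines), "possible connections")
    # 1 machine: 0 connections; 2 machines: 1; 100 machines: 4,950

Doubling the number of machines roughly quadruples the number of possible connections, which is why each additional user makes the network more attractive to the next.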

The downside of the network effect (or, to use the fashionable buzzword, viral marketing) is that the same force that drives the growth phase works against you during startup, where it is known as the chicken-and-egg problem.

The growth of the Web became self-sustaining at some point in 1993, but getting to that point took a lot of time and effort. In particular, the Web had been designed to allow rapid growth. If you had Internet access, all you needed to use the Web was to download a free program. Publishing on the Web required only a little more effort. The Web was free, which encouraged students to download the programs to try them out. The free-trial effect was essential because the value of the Web only really becomes apparent to most people when they see it demonstrated or, better, use it themselves.

As we saw earlier, Internet crime currently follows an S-shaped curve. If nothing is done, a limit will eventually be reached, but the Internet would probably not be worth using by then.

Already there are some indications that the rate of growth in the number of attacks might be declining and that we might be reaching the upper part of the curve. Unfortunately, one likely explanation for this is that professional Internet criminals are becoming more adept at focusing on the most profitable targets.

Fortunately, there is no law of nature that growth must follow an S-curve. The growth of the network hypertext programs that once competed with the Web did not follow that curve. These began growing at more or less the same rate, but the Web began to pull away after a short time. At this point, the network effect began to work against the competing systems as users defected to the larger network.

A hypothetical example of this mode of growth is the bell-shaped curve seen in Figure 4-2.

Figure 4-2. Growth and decline

The bell-shaped curve of growth and decline is the course that we want to cause Internet crime to follow. To do that, we must introduce security controls that defeat the Internet criminals.

Sometimes it is possible to suggest a security mechanism that, like the Web, is easy to deploy because it provides an immediate benefit to the party who deploys it. Spam filters are an example of this type of security measure; subscribing to a good spam-filtering service immediately reduces the amount of spam annoyance. The effectiveness of my spam filter does not depend on anyone else using one. I call measures of this type tactical; they provide immediate value to the party who deploys them even though they do not necessarily address the root cause of the problem.

Tactical measures are generally preferred by vendors of security products because it is easy to explain the benefit to the customer. The market tends to be efficient in identifying and deploying tactical security measures.

Deployment of security controls where the benefits and the cost of deployment are not so neatly aligned is much harder. This is frequently the case with the measures I call strategic—controls that are designed to make the Internet infrastructure less criminal friendly.

The primary engines for Internet crime are spam and botnets. Nobody wants to be pestered by spam or have his machine recruited into a botnet, and plenty of tactical measures have been developed to help people prevent their machines from being the ones affected by the attackers. The problem with tactical measures is that they tend to work a bit like car alarms, which have a marked effect on which particular cars the thieves attempt to steal. All things being equal, thieves will try to steal the car without an alarm. Car alarms can even deter the opportunist or the joy rider, but they do little to affect the number of cars being stolen by professional thieves.

Considerable effort has gone into tactical measures to prevent machines from being compromised in the first place. Less effort has gone into preventing a compromised machine from being used by the criminals even though, as we shall see later, this is an easier technical problem. The business model for tactical measures is compelling regardless of who deploys them; the business model for strategic measures is only compelling if they are likely to achieve critical mass.

Finding the Killer Application

The key to driving infrastructure deployment is to find a compelling benefit that does not depend on the network already being established. In the computer world, an application that realizes this type of benefit is known as a killer application.

A killer application gets a technology to the point where there is a critical mass of infrastructure to support new uses. After that infrastructure is in place, new applications often supplant the original one.

The killer application for the microcomputer, the forerunner of the modern PC, was the VisiCalc spreadsheet program. In the early 1980s, typing was strictly for secretaries. A manager would no more think about buying a word processor for his own use than he would a mop and broom. But a spreadsheet was a management tool.

Although e-mail was the killer application for computer networking, it was the World Wide Web that became the killer application for the Internet in its competition with rival networks.

Before the Web, there were many computer networks running many different network protocols. The Internet was popular in the U.S., but most UK universities used a network called JANET. The machines I used at Oxford were connected to both JANET and HEPNET, a private network for international research labs working in high-energy physics and astronomy.

Most of the networks were connected to each other through machines called gateways so that it was not necessary to be on the same network as someone to send that person an e-mail. Being on the same network made it easier, of course, but usability was not a priority on HEPNET, a network exclusively for the use of researchers who either had a degree in nuclear physics already or were studying for one.

If e-mail had been the only important application of computer networking, the convergence of the network protocols would have taken considerably longer, if it had been completed at all. The arrival of the Web meant that the network you connected to did make a difference. Even though the early Web software could connect to other computer networks, the Internet was by far the largest and offered the most information. You could receive e-mail on HEPNET, but you couldn’t access Internet Web sites.

The Web itself required a killer application to get started. When the Web first started, the only information available on it described the Web protocols themselves. The amount of information available grew slowly in the first three years, so slowly, in fact, that in 1992 I surfed the entire Web in one evening.

Tim Berners-Lee, the inventor of the Web, realized that providing content was the key to making the Web useful and that a lot of information would be required to “prime the pump.” The breakthrough information source was, of all things, the CERN telephone directory. This was available in two forms—as a printed directory and as an online service—but the online service ran on only one machine out of the thousands of computer systems in use at CERN, a machine running an antiquated operating system deservedly notorious for being a pig to use.

After the CERN phone book was made available on the Web, researchers at CERN discovered that the Web was the quickest way to look up a phone number. As physicists turned to the Web to access the phone book, a critical mass of infrastructure was established to support other uses.

Why Standards Matter

The Internet has almost a billion users. If the Internet is to remain useful as a global communication medium, most of those users are going to have to use programs that can talk to each other.

Imagine how hard it would be to use electrical appliances if every socket in your house was a different shape and required a different plug. Standard plugs and sockets make taking a lamp from one room to another easy. Try to take it abroad, however, and you need either a plug that meets the local standard or possibly a voltage adapter.

Standardization has not yet reached the other end of the power cord. Each time I travel, I take between five and fifteen electronic devices with me—at a minimum my cell phone, headset, computer, music player, and wireless hub. Each of these devices requires its own “power supply.” Each power supply does almost the same thing, converting AC power at various mains voltages and frequencies into 9 to 12 volts DC. If the various gadget manufacturers would only agree on a standard, I could take one adapter to recharge the whole lot, and there would be room in my travel bag for some new gadgets.

Several manufacturers have tried to address this problem with “universal” power adapters, which offer a range of interchangeable tips to power the assorted devices. Unfortunately, these are a poor substitute for a standard. Most of the adapters will only recharge one device at a time, and each new device usually requires a new tip that is only available from the manufacturer. The miniature USB connector looks likely to emerge as the standard for charging mobile phones and accessories, but progress has been slow, and the specification only supports low power devices.

Design is largely a matter of reducing complexity. Standards reduce complexity by reducing the number of decisions the designer needs to take. In most cases, this is a net benefit even if the design chosen for the standard is not the best from a technical point of view.

U.S. readers will be familiar with a style of light bulb with an Edison screw on the end. European readers are more likely to be familiar with the Swan bayonet fitting, where the end of the light bulb is flat and has two prongs at the end (see Figure 4-3).

Figure 4-3. Swan bayonet and Edison screw bulbs

Like most major inventions, the light bulb had more than one inventor. Although Edison is usually credited with its invention in the U.S., he worked with Joseph Swan, a British inventor who applied for his patent a year earlier.

Edison developed his screw thread design while testing a large number of different filaments to see which gave the most light and were most durable. After painstakingly soldering the first trial bulbs into the test rig, Edison had the idea of mounting each test bulb onto a one-inch screw thread allowing them to be easily inserted and removed.

The Swan bayonet fixture was the invention of Alfred Swan, Joseph’s brother. A prolific inventor in his own right, Alfred worked patiently and methodically on improvements to his brother’s invention—in particular, the design of the light bulb base. His numerous contributions included the invention of Vitrite, a glass-like insulator used for making the bulb base.

By 1895, the number of bulb bases had proliferated, and 14 different variations were in use. Interchanging bulbs required the use of adapters and could be dangerous, possibly causing a fire. An editorial in The Electrical Engineer pleaded the case for standardization,[2] leading to a debate on the merits of the various designs. Even at the time, it was clear that the Edison screw was flawed. Edison screw bulbs have a tendency to work themselves loose over time, and over-tightening can cause them to break. But in the U.S., at least, the Edison screw won, because it was in widespread use, and making a choice was much more important than making the best possible choice.

Marry in Haste, Repent at Leisure

One of the many reasons that setting standards is hard is that the consequences of badly chosen standards are all around us. I am typing on a QWERTY keyboard that was originally designed to keep typists from typing so fast that the machine jammed. The QWERTY arrangement avoids placing keys close together that are often typed in succession (for example, th). Even though the mechanical issue that caused the jamming was solved decisively within a few years of the invention of QWERTY, we are still stuck with the layout today. When a mistake is made, the effects can last for decades or even centuries.

The lightning-fast development pace in the early days of the Web led to plenty of mistakes that we have since had cause to regret. Soon after I first saw the Web, I decided it would be nice if there was a way to follow links in the reverse direction, being able to follow the links that point to a page as well as the ones leading from it. This led to the idea of the “Referer” field, which specifies the URL of the page containing the link the user followed when making the request.

Unfortunately, I typed the field name with one r fewer than some feel it deserves. The shorter spelling has saved vast quantities of network bandwidth. Perhaps there is somewhere the driver of a mechanical digger who might have found several days’ extra employment laying fiber for Internet distribution had I chosen the conventional spelling. My choice of spelling has also resulted in endless complaints, some of which include a demand to “correct” it. This would require updates to every Web browser and Web server on the planet. With a billion users and a cost of $50 per user, I believe it is rather more likely that the editors of the Oxford English Dictionary will come around to accepting my spelling first.
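For readers who have never looked inside an HTTP request, the sketch below (using Python’s standard urllib; the URLs are placeholders) shows the field as it actually appears on the wire, one r short of the dictionary spelling.

    # A minimal sketch using Python's standard library; the URLs are placeholders.
    import urllib.request

    request = urllib.request.Request(
        "http://www.example.com/target-page",
        headers={"Referer": "http://www.example.com/page-containing-the-link"},
    )
    # The request sent to the server includes the line:
    #   Referer: http://www.example.com/page-containing-the-link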

Although that mistake was harmless, others resulted in long-term difficulties. The original mechanism for embedding images in Web pages uses pixels as a unit of measure. A pixel is the smallest dot that can be shown on the computer monitor, and a typical monitor has a resolution of between 75 and 100 dots per inch. Using pixels as a unit of measure made some sense when practically every computer display was roughly the same size and resolution. But if you have a high-resolution display that packs twice the usual number of dots into a given space, all your pictures will shrink to half the size intended. This problem could have been avoided if a bit more time had been available to refine the original design before it became set in stone.
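The arithmetic behind the shrinking pictures is simple enough to write down. In the sketch below (Python; the display resolutions are assumed for illustration), the physical size of an image is just its width in pixels divided by the display’s dot density.

    # Physical size on screen is pixels divided by the display's dots per inch.
    def physical_width_inches(pixels, dots_per_inch):
        return pixels / dots_per_inch

    print(physical_width_inches(300, 75))    # 4.0 inches on a 75 dpi display
    print(physical_width_inches(300, 150))   # 2.0 inches when the dot density doubles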

Some people find the idea that the free market might make the “wrong” choice to be unacceptable.[3] But there is no law of economics that dictates that free markets must or should choose the technically superior solution. The dominance of QWERTY demonstrates only that nobody has proposed an alternative that is compelling enough to switch.

Ownership and Control

Early on in the dot-com boom, many people had the idea that if they could just arrange for a “few cents” of every dollar that flowed through the Internet to stick to their fingers, they would become phenomenally rich. Very few of these ideas ever came to anything, not least because the “few cents” that these entrepreneurs felt was their due was often the entire gross margin of the businesses they were attempting to prey upon.

The Internet is a fiercely competitive market where most startups fail, having produced nothing but rivers of red ink. There are no easy profits to be made from providing Internet technology. Every penny of revenue earned is a cost that someone else will resent.

One of the reasons that the dispute over the standard for the light bulb socket took so long to resolve was that it was also a dispute over property. Alfred Swan’s designs did have real advantages over the Edison screw, but these came at a cost because Swan owned patents on both the design and the materials used to build it. By the time the need for a standard had become evident, the original Edison patents had expired, and the design could be adopted by rival manufacturers without paying a fee.

Agreements over standards inevitably involve economic interests. A patent might soar in value if the technology claimed is essential to implement a standard or become worthless if another technology is chosen.

Patents are not the only potential cause of dispute. Arcane details of a standard might favor one company over its competitors. A large complex standard might favor a company with significant engineering resources over rivals whose resources are severely constrained. A vendor who has already developed a product is likely to push for a standard that closely matches his existing product, whereas a new entrant to the market is likely to prefer a completely fresh approach.

Sometimes a faction might not believe it is in its interests for an agreement to be reached at all. The point of a standard is, after all, to turn an idea into a commodity. This is good for the customer but bad for the established company whose main product is about to be commoditized.

Adam Smith observed that when people of the same trade meet together, the result is usually a conspiracy against the public. Smith’s particular concern was monopolies, which he demonstrated were detrimental to the public good. A standard is a type of monopoly and does not always represent the best interests of the public.

The DVD specification has built-in restrictions to prevent a disk sold in the U.S. from playing on an unmodified DVD player sold in Europe. The purported justification for this restriction is that it prevents a DVD released in the U.S. from drawing customers away from a concurrent European theatrical release. Critics note that the scheme also allowed the studios to maintain differential pricing, charging European consumers up to twice the U.S. price for the same material.

A standards process is potentially a highly political affair. Fortunately, the more astute participants usually understand that the reason they are there in the first place is to achieve something together that they cannot achieve alone, and that if they insist on demands that the other participants feel are unreasonable, the result might be that the other parties reach agreement without them.

It is rare for an Internet standard to require the use of a patented technology unless the patent holder agrees to issue a license to use the technology for the purpose of implementing the standard without charge. It makes little sense for 20 or more major companies to send some of their most expert and highly paid engineers to a forum to create a common standard if one party is going to be paid for his efforts and the rest are not. The advantage provided by a patented technology must be very significant to outweigh the cost of the patent license.

In retrospect, one of the mistakes we made in the design of the Web was not introducing a default audio format into the early browsers. This was largely because only a few of the engineering workstations in use at the time could make sounds. The lack of a default audio format led to a continuous tug of war between rival proprietary audio compression schemes fighting it out to become king of the hill. Even if the default audio format had been the uncompressed stream of samples used in CD players, the fact that every browser supported a common standard would have confined this dispute to a squabble over the optimal means of realizing a capability that the users already had, rather than a competition to control the standard for providing that capability. Instead of expecting to receive a patent royalty rate proportional to the size of the market for online audio content (i.e., billions of dollars), the patent holders could only expect royalties proportional to the savings in the cost of bandwidth that their invention allowed (i.e., millions).

Another decision that later turned out to have been a mistake was the decision to use what was then known as the CompuServe GIF format. This was a simple compression scheme that had originally been developed by CompuServe, one of the pre-Internet dialup networking services that later became part of AOL. Even at the time, the GIF format was beginning to show its age, but it was widely used, and CompuServe made no proprietary claims to the scheme. Then, in December of 1994, after the GIF algorithm had become firmly entrenched as a Web standard, the Unisys Corporation issued a statement claiming that GIF infringed a patent it held and began demanding substantial royalties from the developers of every program that used the technology.

One of the peculiarities of the U.S. patent system at the time the GIF patent was filed was that patent applications were not published until after the patent was granted. If the GIF designers had known that a patent had been applied for, they could easily have chosen a slightly different technique.

Standards Organizations

A standard can come into being in two ways. The first path is the accidental de facto standard, which emerges when a well-known design that meets a particular need is copied. The power socket in cars, which began life as the place to plug in a cigarette lighter, is an example of this type of standard. Nobody sat down with a plan to design a standard power socket for plugging accessories into a car. It just happened that most cars had cigarette lighters with electrical contacts in the same place.

The second path is to take the proposal to an organization that has established a formal standards process. Standards established this way are known as de jure standards.

The best layperson’s guide to formal standards processes is Kafka’s The Trial,[4] in which we are told, “All the ways are guarded, even this one.”

There are many ways to establish a standard, but all are guarded. It is often said that one of the nice things about standards is that there are so many to choose from. Today there are three major Internet standards organizations: the IETF, the World Wide Web Consortium (W3C), and OASIS. In addition, ANSI, the IEEE, the ITU, and an alphabet soup of smaller, more narrowly focused organizations write or endorse standards.

The proliferation of standards bodies is partly a result of the fact that there is too much work for one body to manage, but there are also social, political, and cultural pressures that lead to fragmentation. Some standards organizations are more professional and effective than others, but all of them might be described by Winston Churchill’s definition of democracy: “the worst possible system of government, apart from all the alternatives.” All standards processes take longer than it appears should be necessary, all involve protracted circular arguments over technical details that are probably unnecessary, and none arrives at a perfect result.

One of the many problems with a standards process is that any activity that requires a large number of people (more than five) to come to agreement inevitably involves a great deal of argument and politics. The process becomes even more difficult when there are 20 to 50 participants who live in different parts of the world and most of the business must be conducted through mailing lists or telephone conferences.

There is, however, an even more fundamental problem with standards processes: there is a big difference between what they can achieve and what many people who attend them believe they should or might achieve.

A standard that is widely adopted can quickly achieve the status of a kind of law. If you want to sell light bulbs, they have to fit the sockets in the lights. If you want to sell lights, the sockets have to match the available supply of light bulbs. There is, however, no law that approval of a standard will cause it to become widely adopted.

One of the side effects of the explosive growth of the Web was that the need for a formal standard became apparent at an early stage when the number of users was measured in hundreds of thousands rather than tens of millions. I can see now that this accident of history misled me into seeing the primary role of standards processes as a means of initiating change rather than a means of sustaining the momentum of change that is already taking place.

Returning to our S curve for a moment, a standard can enable an application to grow faster and for longer in the growth phase, allowing a higher degree of deployment to be achieved at maturity, but a standard does little to help you get to the growth phase if you are still stuck at startup.

More important than the actual standards document are the stakeholders who support it. The real value of a formal standards process is that it provides a means of assembling a broad coalition of supporters who can help the standard succeed.

Inclusiveness

In the span of five years, the Internet metamorphosed from an academic experiment connecting a handful of universities into a global medium for communication and commerce. It would be surprising if the institutions that define Internet standards had kept pace with the political implications of this change.

The number of stakeholders affected by any change is now very large, and some feel that they are being excluded from the decision-making process, often with justification.

For many years, a point of contention was the length of time it took to change the domain name system to allow Arabic, Chinese, Korean, and other non-European Internet users to register domain names in their own character set. It was rather hard to believe that this was being given a high priority when the standards documents themselves had to be prepared exclusively in English, in a format that allows only the 26-letter Roman alphabet.

The official justification for this state of affairs is again inclusiveness, but in the technical rather than the cultural sense. If the standards required state-of-the-art technology to read them, they could only be read by people with access to that technology, and if the technology were to change in the future, the documents might become unreadable. The surest way to ensure technical inclusiveness is to stick to the capabilities of 1960s-era teletype printers (even though the resulting documents print reliably only on a paper size used in North America, cannot include readable diagrams, and are difficult to read).

Lack of readability is a particular problem in a document that in theory has been carefully written to avoid confusion and ambiguity. The purpose of modern typography is not just aesthetics; it is to make documents easier to read.

The refusal to adopt a more modern document structure has nothing to do with any shortcomings in the alternatives. The W3C has used HTML as the format for its standards documents for more than a decade without difficulty. The real problem is that engineers working on advanced computer technology are often the most resistant to technical change that might devalue their experience. There is a trade guild aspect to the process.

Computers have always attracted a priesthood believing that their secrets should be reserved for the initiated. People who define their sense of self-importance through the accumulation of arcane knowledge can feel threatened by attempts to render their knowledge obsolete. The idea of making a computer easy for the common man to use is considered a threat.

The trade guild aspect is particularly worrying when you consider that the one interest that is never directly represented at the table is that of the ordinary users who just want to get their work done, pay their bills, find interesting information, chat, and so on—in short, the people who want to use the computer as a tool rather than an end in itself.

Consistency

Someone once wrote that a foolish consistency is the hobgoblin of little minds. When it comes to computing systems, consistency counts for a lot.

Consistency is not the same as standardization. For a light bulb to be useful, it must comply with the standard for light bulbs, or it will not fit in the socket. But light bulbs come in different sizes and shapes, so there is more than one standard for light bulbs, even though in practice the standards are similar. The standardization board could have chosen one design for the large base and an entirely different design for the smaller size, or it could, in a fit of perversity, have decided that a large-base bulb would screw in clockwise but a small-base bulb would have a reverse thread and screw in counterclockwise. From a purely mechanical and electrical point of view, the reverse-threaded bulb would work just as well, but from a usability point of view, the choice would be a disaster.

Consistency matters because machines are used by people, and it takes time and effort for people to learn. Designs that are inconsistent tend to cause people to make mistakes.

Consistency does not come for free, however; it is not even easily defined. Inconsistent design becomes obvious, but consistency is subjective. Making a design consistent takes a considerable amount of time, effort, and skill.

Achieving consistency across a set of related standards is harder still, particularly in the case of a design in a rapidly moving technical field such as computer science. The basic e-mail protocols were defined 25 years ago, the basic protocol of the Web 15 years ago. A great deal was learned about computer network protocol design in the intervening years, and that is reflected in the different design choices made. Experts would probably agree that some of these design choices are at least in part the reason for the success of the Web, whereas others are simply design inconsistencies that provide no real value. But it is doubtful that there would be much agreement as to which choices fall into either category.

The anthropologist Robin Dunbar argued that 150 is a natural limit on the size of human social groups.[5] When the group size begins to exceed this number, social relationships start to break down, and the group naturally begins to separate into factions. The size of IETF meetings peaked during the dot-com boom in 2000 at 2810 attendees. Attendance since 2004 has averaged less than half that number but is still almost 10 times the limit suggested by Dunbar.

The number of individuals participating in the W3C and OASIS working groups is probably comparable although somewhat more difficult to measure because the working groups tend to hold their face-to-face meetings independently.

If consistency is to be achieved, there must be coordination between the different working groups, yet the organizations have already grown beyond the size where direct social interactions scale. The result is that organizations such as the IETF, which have made a concerted attempt to impose a particular concept of consistency on standards, end up with a process that has become unacceptably slow.

Dependency

Standards organization politics is further complicated by the fact that people are not only concerned by what you intend to do; what you might do often worries them even more. And because a lot of people are uncomfortable raising issues that might appear to be accusations, a simple problem can often get bogged down under a barrage of unrelated issues that are really substitutes for the real concern.

One of these problems started to affect the W3C a few years after I left. Sir Tim Berners-Lee, the inventor of the Web and director of the Consortium, has spent a lot of time in recent years on his personal research interest, the Semantic Web. When I joined VeriSign and started working in W3C working groups as a company representative rather than a member of W3C staff, I started to hear complaints about the amount of time and effort the W3C was spending on this work.

I did not understand the complaints at the time because if anyone has earned the right to spend some time doing blue-sky research with money from companies that have made a fortune from the Web, then Tim has. If you attend a concert by a virtuoso, you should understand that part of the price for the Mozart and Vivaldi is listening to the Messiaen. The actual sums spent on the work were not large and were in any case mostly funded from government research grants.

It was only several years later, when I found myself in a position where a proposal I was supporting as a way to help control spam was being held hostage by a group with a rather different agenda, that I realized what the real concern behind the complaints was. The complaints about the Semantic Web were not really about staffing or resources; the real concern was dependency.

The U.S. Congress suffers from a similar institutional pathology that illustrates the problem rather well: The more popular and urgent a bill is, the less chance it has of becoming law. A bill that is considered “must pass” becomes a magnet for opportunist measures that would have no chance of passing by themselves. The result is known as a “Christmas tree” and as often as not collapses under the weight of the unwanted decorations.

The standards world equivalent is dependency. All standards build on previous work. The protocols for the World Wide Web are built in the manner of a multistory building. The Internet Protocol (IP) forms the foundation, and on top of this is built another layer of protocols called TCP and UDP. HTTP is built on top of TCP and a protocol called DNS, which in turn is built on UDP and TCP. Building on existing work in this way is a good thing.
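The layering just described can be written down directly. The sketch below (Python; it simply restates the dependencies named in the text) lists each protocol with the protocols it is built on.

    # Each protocol with the protocols it is built on, as described above.
    layers = {
        "IP":   [],              # the foundation
        "TCP":  ["IP"],
        "UDP":  ["IP"],
        "DNS":  ["UDP", "TCP"],
        "HTTP": ["TCP", "DNS"],
    }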

The problem comes when someone tries to make others do the work of securing deployment for him by forcing their plans to depend on his.

The real concern of the Semantic Web opponents was that all future work by the Consortium might be required to build on the Semantic Web framework. After it was understood that the Semantic Web would be required to stand or fall on its own merits, the opposition disappeared.

The problem of dependency works both ways. Relying on another group to succeed in deploying its work is an obvious hazard. What is less obvious is that allowing others to rely on your work before it is ready is also a mistake. A killer application can quickly become a killed application, which is no use to anyone.

This problem arose in the design of DKIM, a technology designed to help in the fight against spam and phishing discussed in Chapter 13, “Secure Messaging.” The design of DKIM uses the Domain Name System (DNS), which changes user-friendly Internet names such as www.example.com into IP addresses, the Internet equivalent of telephone numbers, which are used to direct packets of data. The question for the designers was whether DKIM should work with the DNS infrastructure as currently deployed or whether it should use proposed changes to the DNS infrastructure.
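As a concrete illustration of the lookup the DNS performs, the sketch below uses Python’s standard socket module; www.example.com is the reserved example name used in the text, and the addresses returned will vary.

    # Resolve a user-friendly name into the IP addresses used to direct packets.
    import socket

    hostname, aliases, addresses = socket.gethostbyname_ex("www.example.com")
    print(addresses)   # a list of IP addresses, the Internet's "telephone numbers"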

In smaller companies and ISPs, the same network administrator is often responsible for both the DNS server and the e-mail server. In large organizations, these are often separate duties, and one or both services might be outsourced. This separation means that the cost of upgrading the two independent systems is at least twice the cost of modifying one.

If we assume that the cost of deployment of the two systems is equal, but that the incentive to deploy the e-mail change is twice the incentive to deploy the DNS change, and the two changes take place independently, a simulation results in the growth curve of Figure 4-4.

Figure 4-4. Deployment of e-mail server and DNS server

If we now assume that the deployment of the e-mail server is dependent on the prior deployment of the DNS server, the deployment of the DNS system is accelerated somewhat, but the deployment of the e-mail server application is practically stalled (see Figure 4-5).

Figure 4-5. Deployment of e-mail server and DNS system, when linked

A more effective strategy is to engineer a situation where deployment of the killer application is not dependent on the infrastructure deployment, but an additional advantage is realized when the infrastructure is deployed.

If the deployment of the e-mail server is allowed to go ahead at its own pace and is not made dependent on the DNS server upgrade, but deployment of the e-mail application provides an additional advantage for the DNS deployment, the result is the curve of Figure 4-6. The deployment of the e-mail server is not affected, but the DNS server deployment goes ahead much faster than in any of the previous scenarios.

Figure 4-6. Deployment of e-mail server and DNS system, with incentive
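The three scenarios can be reproduced with a toy adoption model. The sketch below is not the simulation that produced Figures 4-4 through 4-6; it is a minimal logistic model in which the growth rates, the dependency rule, and the incentive term are assumptions chosen to illustrate the shape of the argument.

    # A toy logistic adoption model for the three coupling scenarios.
    # All parameters are illustrative assumptions.
    def simulate(steps=200, seed=0.01, r_mail=0.10, r_dns=0.05,
                 mail_needs_dns=False, mail_boosts_dns=0.0):
        mail = dns = seed
        history = []
        for _ in range(steps):
            # E-mail can spread to everyone, or only to sites that already run
            # the upgraded DNS server, depending on the scenario.
            ceiling = dns if mail_needs_dns else 1.0
            mail += r_mail * mail * max(0.0, ceiling - mail)
            # DNS adoption, optionally accelerated by the installed base of
            # upgraded e-mail servers.
            dns += r_dns * (1.0 + mail_boosts_dns * mail) * dns * (1.0 - dns)
            history.append((mail, dns))
        return history

    independent = simulate()                                     # Figure 4-4
    linked = simulate(mail_needs_dns=True, mail_boosts_dns=1.0)  # Figure 4-5: e-mail stalls
    incentive = simulate(mail_boosts_dns=3.0)                    # Figure 4-6: DNS speeds up

Even in a model this crude, making the e-mail upgrade wait for the DNS upgrade stalls it almost completely, whereas letting it run ahead and feeding the benefit back speeds the DNS deployment without holding the e-mail deployment back.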

DKIM has been carefully designed so that deployment of upgraded infrastructure, in particular the DNSSEC security protocol described in Chapter 16, “Secure Networks,” is strongly encouraged but not required for deployment of DKIM.

Advocacy

Regardless of the standards strategy, the case for deployment must be made to the wider Internet community beyond the small group that developed the proposal. Unless a proposal addresses a universally acknowledged pain point such as spam, a compelling need must be demonstrated. In every case, the advocates for the proposal must provide a convincing argument for how it will address the problem and why it is better than any alternatives on offer, including the always-present alternative of carrying on as before.

In recent years, there has been a trend toward the creation of advocacy groups that push for the deployment of technology developed in another forum. Sometimes the advocacy group is focused on a particular specification, but in other cases the group is focused on a specific industry or a particular problem and proposes a range of technologies that it believes address that need.

The main tools of technology advocates are public speaking at conferences, industry association meetings, and interviews with the media. Occasionally, they might find time to write a book such as the one you are now reading.

The effect is cumulative. The first time an idea is aired, it is usually met with indifference or outright hostility. Hostility is a good sign because it at least shows that someone is listening. Before people are convinced that the benefits of a proposal outweigh its costs and other disadvantages, they have to hear the same message repeated from many different sources.

The Four Horsemen of Internet Change

In 1798, Thomas Malthus predicted[6] that if humans did not limit population growth of their own accord, this would be achieved by “the ravages of war, pestilence, famine, or the convulsions of nature.” The four forces that Malthus predicted would compel change are traditionally identified with the four horsemen of the apocalypse in Revelation 6:1-8.

The forces that compel change in the commercial world are less spectacular than the biblical riders of the apocalypse, but they are equally effective: customers, liability, audit, and regulation.

Customers

Vendors who intend to survive listen to their customers. Most medium to large companies have extensive mechanisms for soliciting input from their customers. But the most effective means by which customers communicate with their vendors is by soliciting tenders for the acquisition of new technology through a Request For Proposals (RFP) process. A feature that appears infrequently in RFPs is unlikely to make it into a product roadmap unless the software vendor is convinced it offers significant advantages. A feature that appears frequently in RFPs is almost certain to make it into the product roadmap regardless of whether the software vendor thinks it has value.

This provides a powerful opportunity to effect change by appealing directly to the authors of RFPs. The task of writing RFPs is often outsourced to a consulting company that advises on the procurement process. This means that a single presentation at the right trade show can result in a question about support for a standard being added to a large number of RFP check lists.

Liability

Opinions differ as to who should be held liable for losses due to Internet crime. At some point, those differences will be argued extensively and expensively in a variety of law courts.

In the medium term, the question of who is liable will inevitably be settled in favor of the consumer. The experience of the credit card companies is instructive. After fighting tooth and nail against government regulations that made them liable for fraud losses, the card issuers discovered that this protection was the most important benefit of their product.

Liability is an unpredictable cost that businesses typically attempt to make predictable by means of insurance. The insurer is in turn anxious to keep losses to a minimum by either requiring customers to take appropriate security measures or discounting insurance rates for those who do.

Audit

One of the major factors that is driving deployment of corporate computer security measures today is audit requirements.

One of the largely unforeseen side effects of the U.S. Sarbanes-Oxley Act is that for a company to be confident that its accounting information is accurate, it must be confident that the computer systems used to prepare and analyze that information are secure.

Security audits are also being required by large numbers of companies that process data covered by the European Union privacy directive.

Audits of information systems frequently result in a “domino effect” in which an audit of one system leads to a requirement to audit the systems that feed it, which in turn leads to a requirement to audit the systems that feed them. The “Y2K fever” over the millennium bug was largely driven by companies requesting that their suppliers provide a satisfactory Y2K audit, which required the suppliers to request a Y2K audit from their suppliers in turn.

Regulation

Internet crime, like every other form of crime, is a government concern. Government action is inevitable if Internet crime cannot be controlled by any other means.

There is a marked difference in the approach to regulation in Europe and the U.S. The U.S. approach is generally to resist legislation on principle until the last minute, thus ensuring that the legislation can only be passed when the pressure has become irresistible and the ability to mitigate undesirable effects has been lost. European governments tend to be much more open to legislate at an early stage, particularly when the government in question has absolutely no intention of enforcing the measures passed.

Regulation is, unfortunately, a blunt instrument, particularly in a field that moves as fast as technology. The information available to government policy makers is no better than the information available to any other party. As a result, technology regulation occasionally appears inspired, but more usually it ends up backing the wrong technology.

Germany was widely praised for promoting the deployment of digital ISDN telephone technology in the late 1980s. This policy has looked less visionary in the decades since and is now seen as having delayed the adoption of high-speed Internet access via ADSL.

Regulation is a blunt instrument, but one that becomes inevitable if the other forces fail to result in a satisfactory rate of change.

Key Points

  • Proposing change is not enough. Advocacy and strategy are required if others are going to act on the proposal.

    • It is particularly important to unlearn the lessons of the “dot-com bubble.” Many of the factors that made change happen during that period no longer apply.

  • Proposals might be tactical or strategic according to how the benefits affect the party deploying the solution.

    • A tactical proposal can be deployed unilaterally and provides an immediate benefit to the party who deploys it.

      • Tactical proposals do not require a critical mass to provide value and can be deployed quickly.

      • Tactical proposals are more marketable, but it is not always possible to provide a tactical solution.

    • A strategic proposal changes the Internet infrastructure, usually requiring changes to be made that do not provide an immediate benefit to the party deploying them.

      • For a strategic proposal to be worthwhile, adopting it must provide a significant benefit.

      • The key to driving deployment of the infrastructure required to support a strategic solution is to find a killer application.

        • A killer application is simply a pump-primer; it need not be a major use of the infrastructure after it is deployed.

  • Standards are important accelerators for adoption but do not drive adoption in themselves.

    • Incompatibility leads to expense.

    • After they are established, standards are hard to change.

  • Standards raise complicated questions of ownership and control.

    • The existence of a standard can transform a worthless patent claim into an essential and valuable one.

    • Most participants in a standards process will attempt to prevent any patent claim becoming essential in this way.

  • Standards organizations play a major role in setting standards.

    • The purpose of a standards organization is to generate the constituency needed for deployment.

    • If you are trying to do design in a committee, you are making a mistake.

    • Standards organizations have their own agendas, which might include inclusiveness and consistency.

      • These are generally desirable outcomes but might be interpreted in a counterproductive fashion.

    • If the price of getting a standard ratified is dependency, it is almost certainly better to walk out of the room without ratification.

      • Dependency kills deployment.

  • Commercial entities are driven by the four horsemen: customer requirements, liability, audit, and regulation.
