Appendix C. Notes, Bibliography, and Acknowledgements

A Brief History of Hackerdom

Notes

Note 1

Levy, Steven; Hackers, Anchor/Doubleday 1984, ISBN 0-385-19195-2.

Note 2

Raymond, Eric S.; The New Hacker’s Dictionary, MIT Press, 3rd edition 1996. ISBN 0-262-68092-0.

Note 3

David E. Lundstrom gave us an anecdotal history of the Real Programmer era in A Few Good Men From UNIVAC, 1987, ISBN 0-262-62075-8.

The Cathedral and the Bazaar

Notes

Note 4

In Programming Pearls, the noted computer-science aphorist Jon Bentley comments on Brooks’s observation with If you plan to throw one away, you will throw away two. He is almost certainly right. The point of Brooks’s observation, and Bentley’s, isn’t merely that you should expect the first attempt to be wrong, it’s that starting over with the right idea is usually more effective than trying to salvage a mess.

Note 5

Examples of successful open-source, bazaar development predating the Internet explosion and unrelated to the Unix and Internet traditions have existed. The development of the info-Zip (http://www.cdrom.com/pub/infozip/) compression utility during 1990-1992, primarily for DOS machines, was one such example. Another was the RBBS bulletin board system (again for DOS), which began in 1983 and developed a sufficiently strong community that there have been fairly regular releases up to the present (mid-1999) despite the huge technical advantages of Internet mail and file-sharing over local BBSs. While the info-Zip community relied to some extent on Internet mail, the RBBS developer culture was actually able to base a substantial on-line community on RBBS that was completely independent of the TCP/IP infrastructure.

Note 6

That transparency and peer review are valuable for taming the complexity of OS development turns out, after all, not to be a new concept. In 1965, very early in the history of time-sharing operating systems, Corbató and Vyssotsky, co-designers of the Multics operating system, wrote (http://www.multicians.org/fjcc1.html):

It is expected that the Multics system will be published when it is operating substantially... Such publication is desirable for two reasons: First, the system should withstand public scrutiny and criticism volunteered by interested readers; second, in an age of increasing complexity, it is an obligation to present and future system designers to make the inner operating system as lucid as possible so as to reveal the basic system issues.

Note 7

John Hasler has suggested an interesting explanation for the fact that duplication of effort doesn’t seem to be a net drag on open-source development. He proposes what I’ll dub Hasler’s Law: the costs of duplicated work tend to scale sub-quadratically with team size—that is, more slowly than the planning and management overhead that would be needed to eliminate them.

This claim actually does not contradict Brooks’s Law. It may be the case that total complexity overhead and vulnerability to bugs scales with the square of team size, but that the costs from duplicated work are nevertheless a special case that scales more slowly. It’s not hard to develop plausible reasons for this, starting with the undoubted fact that it is much easier to agree on functional boundaries between different developers’ code that will prevent duplication of effort than it is to prevent the kinds of unplanned bad interactions across the whole system that underlie most bugs.

The combination of Linus’s Law and Hasler’s Law suggests that there are actually three critical size regimes in software projects. On small projects (I would say one to at most three developers) no management structure more elaborate than picking a lead programmer is needed. And there is some intermediate range above that in which the cost of traditional management is relatively low, so its benefits from avoiding duplication of effort, bug-tracking, and pushing to see that details are not overlooked actually net out positive.

Above that, however, the combination of Linus’s Law and Hasler’s Law suggests there is a large-project range in which the costs and problems of traditional management rise much faster than the expected cost from duplication of effort. Not the least of these costs is a structural inability to harness the many-eyeballs effect, which (as we’ve seen) seems to do a much better job than traditional management at making sure bugs and details are not overlooked. Thus, in the large-project case, the combination of these laws effectively drives the net payoff of traditional management to zero.
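
A rough way to see the arithmetic behind these regimes (the notation and the exponent are my own illustrative assumptions; neither Hasler nor Brooks quantified this):

    \[
      C_{\mathrm{overhead}}(n) \approx k_1 \binom{n}{2} = \Theta(n^2),
      \qquad
      C_{\mathrm{dup}}(n) \approx k_2\, n^{\alpha}, \quad \alpha < 2 .
    \]

Since \(C_{\mathrm{dup}}/C_{\mathrm{overhead}} \to 0\) as \(n\) grows, there is some crossover team size beyond which the management machinery needed to suppress duplication costs more than the duplication it prevents, while below it traditional management can still net out positive; this is the three-regime picture sketched above.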

Note 8

The split between Linux’s experimental and stable versions has another function related to, but distinct from, hedging risk. The split attacks another problem: the deadliness of deadlines. When programmers are held both to an immutable feature list and a fixed drop-dead date, quality goes out the window and there is likely a colossal mess in the making. I am indebted to Marco Iansiti and Alan MacCormack of the Harvard Business School for showing me evidence that relaxing either one of these constraints can make scheduling workable.

One way to do this is to fix the deadline but leave the feature list flexible, allowing features to drop off if not completed by deadline. This is essentially the strategy of the “stable” kernel branch; Alan Cox (the stable-kernel maintainer) puts out releases at fairly regular intervals, but makes no guarantees about when particular bugs will be fixed or what features will be back-ported from the experimental branch.

The other way to do this is to set a desired feature list and deliver only when it is done. This is essentially the strategy of the experimental kernel branch. De Marco and Lister cited research showing that this scheduling policy (“wake me up when it’s done”) produces not only the highest quality but, on average, shorter delivery times than either “realistic” or “aggressive” scheduling.

I have come to suspect (as of early 2000) that in earlier versions of this essay I severely underestimated the importance of the wake me up when it’s done anti-deadline policy to the open-source community’s productivity and quality. General experience with the rushed GNOME 1.0 release in 1999 suggests that pressure for a premature release can neutralize many of the quality benefits open source normally confers.

It may well turn out that the process transparency of open source is one of three co-equal drivers of its quality, along with wake me up when it’s done scheduling and developer self-selection.

Note 9

It’s tempting, and not entirely inaccurate, to see the core-plus-halo organization characteristic of open-source projects as an Internet-enabled spin on Brooks’s own recommendation for solving the N-squared complexity problem, the “surgical-team” organization—but the differences are significant. The constellation of specialist roles such as “code librarian” that Brooks envisioned around the team leader doesn’t really exist; those roles are executed instead by generalists aided by toolsets quite a bit more powerful than those of Brooks’s day. Also, the open-source culture leans heavily on strong Unix traditions of modularity, APIs, and information hiding—none of which were elements of Brooks’s prescription.

Note 10

The respondent who pointed out to me the effect of widely varying trace path lengths on the difficulty of characterizing a bug speculated that trace-path difficulty for multiple symptoms of the same bug varies “exponentially” (which I take to mean on a Gaussian or Poisson distribution, and agree seems very plausible). If it is experimentally possible to get a handle on the shape of this distribution, that would be extremely valuable data. Large departures from a flat equal-probability distribution of trace difficulty would suggest that even solo developers should emulate the bazaar strategy by bounding the time they spend on tracing a given symptom before they switch to another. Persistence may not always be a virtue...
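
Pending such experimental data, one can at least simulate the intuition. Below is a minimal sketch in Python; the lognormal distribution, the four-symptoms-per-bug figure, and all other parameters are illustrative assumptions of mine, not measurements. It compares a persistent debugger, who traces the first symptom to completion, against a time-boxed one, who bounds each attempt, rotates among symptoms, and doubles the bound each pass:

    import random

    random.seed(42)

    def trace_times(n_symptoms, sigma=2.0):
        """Trace-path time for each symptom of one bug. Lognormal is an
        assumed stand-in for the 'exponentially varying' difficulty the
        respondent speculated about; the real shape is unknown."""
        return [random.lognormvariate(0, sigma) for _ in range(n_symptoms)]

    def persistent(times):
        """Trace the first symptom encountered, however long it takes."""
        return times[0]

    def time_boxed(times, budget=1.0):
        """Round-robin over symptoms, doubling the time box each pass
        (iterative deepening). Abandoned attempts still cost their budget."""
        spent = 0.0
        while True:
            for t in times:
                if t <= budget:
                    return spent + t   # this symptom cracked within the box
                spent += budget        # give up on it, switch symptoms
            budget *= 2

    bugs = [trace_times(4) for _ in range(10_000)]
    avg_p = sum(persistent(b) for b in bugs) / len(bugs)
    avg_b = sum(time_boxed(b) for b in bugs) / len(bugs)
    print(f"persistent: {avg_p:8.2f} hours/bug")
    print(f"time-boxed: {avg_b:8.2f} hours/bug")

Under heavy-tailed assumptions like these the time-boxed strategy wins decisively, because it usually gets to crack the bug through its easiest symptom. If the real distribution turned out to be nearly flat, the abandoned attempts would be pure waste and persistence would win instead; that is exactly why the shape of the distribution matters.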

Note 11

An issue related to whether one can start projects from zero in the bazaar style is whether the bazaar style is capable of supporting truly innovative work. Some claim that, lacking strong leadership, the bazaar can only handle the cloning and improvement of ideas already present at the engineering state of the art, but is unable to push the state of the art. This argument was perhaps most infamously made by the Halloween Documents, http://www.opensource.org/halloween/, two embarrassing internal Microsoft memoranda written about the open-source phenomenon. The authors compared Linux’s development of a Unix-like operating system to chasing taillights, and opined that once a project has achieved “parity” with the state of the art, the level of management necessary to push towards new frontiers becomes massive.

There are serious errors of fact implied in this argument. One is exposed when the Halloween authors themselves later observe that often [...] new research ideas are first implemented and available on Linux before they are available / incorporated into other platforms.

If we read open source for Linux, we see that this is far from a new phenomenon. Historically, the open-source community did not invent Emacs or the World Wide Web or the Internet itself by chasing taillights or being massively managed—and in the present, there is so much innovative work going on in open source that one is spoiled for choice. The GNOME project (to pick one of many) is pushing the state of the art in GUIs and object technology hard enough to have attracted considerable notice in the computer trade press well outside the Linux community. Other examples are legion, as a visit to Freshmeat, http://freshmeat.net/, on any given day will quickly prove.

But there is a more fundamental error in the implicit assumption that the cathedral model (or the bazaar model, or any other kind of management structure) can somehow make innovation happen reliably. This is nonsense. Gangs don’t have breakthrough insights—even volunteer groups of bazaar anarchists are usually incapable of genuine originality, let alone corporate committees of people with a survival stake in some status quo ante. Insight comes from individuals. The most their surrounding social machinery can ever hope to do is to be responsive to breakthrough insights—to nourish and reward and rigorously test them instead of squashing them.

Some will characterize this as a romantic view, a reversion to outmoded lone-inventor stereotypes. Not so; I am not asserting that groups are incapable of developing breakthrough insights once they have been hatched; indeed, we learn from the peer-review process that such development groups are essential to producing a high-quality result. Rather I am pointing out that every such group development starts from—is necessarily sparked by—one good idea in one person’s head. Cathedrals and bazaars and other social structures can catch that lightning and refine it, but they cannot make it on demand.

Therefore the root problem of innovation (in software, or anywhere else) is indeed how not to squash it—but, even more fundamentally, it is how to grow lots of people who can have insights in the first place.

To suppose that cathedral-style development could manage this trick but the low entry barriers and process fluidity of the bazaar cannot would be absurd. If what it takes is one person with one good idea, then a social milieu in which one person can rapidly attract the cooperation of hundreds or thousands of others with that good idea is inevitably going to out-innovate any in which the person has to do a political sales job to a hierarchy before he can work on his idea without risk of getting fired.

And, indeed, if we look at the history of software innovation by organizations using the cathedral model, we quickly find it is rather rare. Large corporations rely on university research for new ideas (thus the Halloween Documents authors’ unease about Linux’s facility at coopting that research more rapidly). Or they buy out small companies built around some innovator’s brain. In neither case is the innovation native to the cathedral culture; indeed, many innovations so imported end up being quietly suffocated under the “massive level of management” the Halloween Documents’ authors so extol.

That, however, is a negative point. The reader would be better served by a positive one. I suggest, as an experiment, the following:

  • Pick a criterion for originality that you believe you can apply consistently. If your definition is I know it when I see it, that’s not a problem for purposes of this test.

  • Pick any closed-source operating system competing with Linux, and a best source for accounts of current development work on it.

  • Watch that source and Freshmeat for one month. Every day, count the number of release announcements on Freshmeat that you consider original work. Apply the same definition of original to announcements for that other OS and count them.

  • Thirty days later, total up both figures.

The day I wrote this, Freshmeat carried twenty-two release announcements, of which three appeared as though they might push the state of the art in some respect. This was a slow day for Freshmeat, but I will be astonished if any reader reports as many as three likely innovations a month in any closed-source channel.

Note 12

We now have history on a project that, in several ways, may provide a more indicative test of the bazaar premise than fetchmail; EGCS, http://egcs.cygnus.com/, the Experimental GNU Compiler System.

This project was announced in mid-August of 1997 as a conscious attempt to apply the ideas in the early public versions of Chapter 2. The project founders felt that the development of GCC, the Gnu C Compiler, had been stagnating. For about twenty months afterwards, GCC and EGCS continued as parallel products—both drawing from the same Internet developer population, both starting from the same GCC source base, both using pretty much the same Unix toolsets and development environment. The projects differed only in that EGCS consciously tried to apply the bazaar tactics I have previously described, while GCC retained a more cathedral-like organization with a closed developer group and infrequent releases.

This was about as close to a controlled experiment as one could ask for, and the results were dramatic. Within months, the EGCS versions had pulled substantially ahead in features: better optimization, better support for FORTRAN and C++. Many people found the EGCS development snapshots to be more reliable than the most recent stable version of GCC, and major Linux distributions began to switch to EGCS.

In April of 1999, the Free Software Foundation (the official sponsors of GCC) dissolved the original GCC development group and officially handed control of the project to the EGCS steering team.

Note 13

Of course, Kropotkin’s critique and Linus’s Law raise some wider issues about the cybernetics of social organizations. Another folk theorem of software engineering suggests one of them: Conway’s Law—commonly stated as If you have four groups working on a compiler, you’ll get a 4-pass compiler. The original statement was more general: Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations. We might put it more succinctly as The means determine the ends, or even Process becomes product.

It is accordingly worth noting that in the open-source community organizational form and function match on many levels. The network is everything and everywhere: not just the Internet, but the people doing the work form a distributed, loosely coupled, peer-to-peer network that provides multiple redundancy and degrades very gracefully. In both networks, each node is important only to the extent that other nodes want to cooperate with it.

The peer-to-peer part is essential to the community’s astonishing productivity. The point Kropotkin was trying to make about power relationships is developed further by the SNAFU Principle: True communication is possible only between equals, because inferiors are more consistently rewarded for telling their superiors pleasant lies than for telling the truth. Creative teamwork utterly depends on true communication and is thus very seriously hindered by the presence of power relationships. The open-source community, effectively free of such power relationships, is teaching us by contrast how dreadfully much they cost in bugs, in lowered productivity, and in lost opportunities.

Further, the SNAFU principle predicts in authoritarian organizations a progressive disconnect between decision-makers and reality, as more and more of the input to those who decide tends to become pleasant lies. The way this plays out in conventional software development is easy to see; there are strong incentives for the inferiors to hide, ignore, and minimize problems. When this process becomes product, software is a disaster.

Bibliography

I quoted several bits from Frederick P. Brooks’s classic The Mythical Man-Month because, in many respects, his insights have yet to be improved upon. I heartily recommend the 25th Anniversary edition from Addison-Wesley (ISBN 0-201-83595-9), which adds his 1986 No Silver Bullet paper.

The new edition is wrapped up by an invaluable 20-years-later retrospective in which Brooks forthrightly admits to the few judgements in the original text which have not stood the test of time. I first read the retrospective after the first public version of this essay was substantially complete, and was surprised to discover that Brooks attributed bazaar-like practices to Microsoft! (In fact, however, this attribution turned out to be mistaken. In 1998 we learned from the Halloween Documents (http://www.opensource.org/halloween/) that Microsoft’s internal developer community is heavily balkanized, with the kind of general source access needed to support a bazaar not even truly possible.)

Gerald M. Weinberg’s The Psychology Of Computer Programming (New York, Van Nostrand Reinhold 1971) introduced the rather unfortunately-labeled concept of egoless programming. While he was nowhere near the first person to realize the futility of the principle of command, he was probably the first to recognize and argue the point in particular connection with software development.

Richard P. Gabriel, contemplating the Unix culture of the pre-Linux era, reluctantly argued for the superiority of a primitive bazaar-like model in his 1989 paper LISP: Good News, Bad News, and How To Win Big. Though dated in some respects, this essay is still rightly celebrated among LISP fans (including me). A correspondent reminded me that the section titled Worse Is Better reads almost as an anticipation of Linux. The paper is accessible on the World Wide Web at http://www.naggum.no/worse-is-better.html.

De Marco and Lister’s Peopleware: Productive Projects and Teams (New York; Dorset House, 1987; ISBN 0-932633-05-6) is an underappreciated gem which I was delighted to see Fred Brooks cite in his retrospective. While little of what the authors have to say is directly applicable to the Linux or open-source communities, the authors’ insight into the conditions necessary for creative work is acute and worthwhile for anyone attempting to import some of the bazaar model’s virtues into a commercial context.

Finally, I must admit that I very nearly called this essay The Cathedral and the Agora, the latter term being the Greek for an open market or public meeting place. The seminal agoric systems papers by Mark Miller and Eric Drexler, by describing the emergent properties of market-like computational ecologies, helped prepare me to think clearly about analogous phenomena in the open-source culture when Linux rubbed my nose in them five years later. These papers are available on the Web at http://www.agorics.com/agorpapers.html.

Acknowledgements

This essay was improved by conversations with a large number of people who helped debug it. Particular thanks to Jeff Dutky, who suggested the debugging is parallelizable formulation, and helped develop the analysis that proceeds from it. Also to Nancy Lebovitz for her suggestion that I emulate Weinberg by quoting Kropotkin. Perceptive criticisms also came from Joan Eslinger and Marty Franz of the General Technics list. Glen Vandenburg pointed out the importance of self-selection in contributor populations and suggested the fruitful idea that much development rectifies bugs of omission; Daniel Upper suggested the natural analogies for this. I’m grateful to the members of PLUG, the Philadelphia Linux User’s group, for providing the first test audience for the first public version of this essay. Paula Matuszek enlightened me about the practice of software management. Phil Hudson reminded me that the social organization of the hacker culture mirrors the organization of its software, and vice-versa. John Buck pointed out that MATLAB makes an instructive parallel to Emacs. Russell Johnston brought me to consciousness about some of the mechanisms discussed in How Many Eyeballs Tame Complexity. Finally, Linus Torvalds’s comments were helpful and his early endorsement very encouraging.

Homesteading the Noosphere

Notes

Note 14

The term noosphere is an obscure term of art in philosophy. It is pronounced KNOW-uh-sfeer (two o-sounds, one long and stressed, one short and unstressed tending towards schwa). If one is being excruciatingly correct about one’s orthography, the term is properly spelled with a diaeresis over the second o to mark it as a separate vowel.

In more detail: this term for the sphere of human thought derives from the Greek noos meaning mind or intelligence. It was invented by E. LeRoy in Les origines humaines et l’evolution de l’intelligence (Paris 1928). It was popularized first by the Russian biologist and pioneering ecologist Vladimir Ivanovich Vernadsky (1863-1945), then by the Jesuit paleontologist/philosopher Pierre Teilhard de Chardin (1881-1955). It is with Teilhard de Chardin’s theory of future evolution to a form of pure mind culminating in union with the Godhead that the term is now primarily associated.

Note 15

David Friedman, one of the most lucid and accessible thinkers in contemporary economics, has written an excellent outline of the history and logic of intellectual-property law (http://www.best.com/~ddfr/Academic/Course_Pages/L_and_E_LS_98/Why_Is_Law/Why_Is_Law_Chapter_11.html). I recommend it as a starting point to anyone interested in these issues.

Note 16

One interesting difference between the Linux and BSD worlds is that the Linux kernel (and associated OS core utilities) has never forked, but BSD’s has, at least three times. What makes this interesting is that the social structure of the BSD groups is centralized in a way intended to define clear lines of authority and to prevent forking, while the decentralized and amorphous Linux community takes no such measures. It appears that the projects which open up development the most actually have the least tendency to fork!

Henry Spencer suggests that, in general, the stability of a political system is inversely proportional to the height of the entry barriers to its political process. His analysis is worth quoting here:

One major strength of a relatively open democracy is that most potential revolutionaries find it easier to make progress toward their objectives by working via the system rather than by attacking it. This strength is easily undermined if established parties act together to raise the bar, making it more difficult for small dissatisfied groups to see some progress made toward their goals.

(A similar principle can be found in economics. Open markets have the strongest competition, and generally the best and cheapest products. Because of this, it’s very much in the best interests of established companies to make market entry more difficult—for example, by convincing governments to require elaborate RFI testing on computers, or by creating consensus standards which are so complex that they cannot be implemented effectively from scratch without large resources. The markets with the strongest entry barriers are the ones that come under the strongest attack from revolutionaries, e.g. the Internet and the Justice Dept. vs. the Bell System.)

An open process with low entry barriers encourages participation rather than secession, because one can get results without the high overheads of secession. The results may not be as impressive as what could be achieved by seceding, but they come at a lower price, and most people will consider that an acceptable tradeoff. (When the Spanish government revoked Franco’s anti-Basque laws and offered the Basque provinces their own schools and limited local autonomy, most of the Basque Separatist movement evaporated almost overnight. Only the hard-core Marxists insisted that it wasn’t good enough.)

Note 17

There are some subtleties about rogue patches. One can divide them into friendly and unfriendly types. A friendly patch is designed to be merged back into the project’s main-line sources under the maintainer’s control (whether or not that merge actually happens); an unfriendly one is intended to yank the project in a direction the maintainer doesn’t approve of. Some projects (notably the Linux kernel itself) are pretty relaxed about friendly patches and even encourage independent distribution of them as part of their beta-test phase. An unfriendly patch, on the other hand, represents a decision to compete with the original and is a serious matter. Maintaining a whole raft of unfriendly patches tends to lead to forking.

Note 18

I am indebted to Michael Funk for pointing out how instructive a contrast with hackers the pirate culture is. Linus Walleij has posted an analysis of their cultural dynamics that differs from mine (describing them as a scarcity culture) in http://www.df.lth.se/~triad/papers/Raymond_D00dz.html, A Comment on Warez D00dz Culture.

The contrast may not last. Former cracker Andrej Brandt reports that he believes the cracker/warez d00dz culture is now withering away, with its brightest people and leaders assimilating to the open-source world. Independent evidence for this view may be provided by a precedent-breaking July 1999 action of the cracker group calling itself Cult of the Dead Cow. They released Back Orifice 2000, their tool for breaking Microsoft Windows security, under the GPL.

Note 19

In evolutionary terms, the craftsman’s urge itself may (like internalized ethics) be a result of the high risk and cost of deception. Evolutionary psychologists have collected experimental evidence (see Note 28) that human beings have brain logic specialized for detecting social deceptions, and it is fairly easy to see why our ancestors should have been selected for the ability to detect cheating. Therefore, if one wishes to have a reputation for personality traits that confer advantage but are risky or costly, it may actually be better tactics to actually have these traits than to fake them. (Honesty is the best policy.)

Evolutionary psychologists have suggested that this explains behavior like barroom fights. Among younger adult male humans, having a reputation for toughness is both socially and (even in today’s feminist-influenced climate) sexually useful. Faking toughness, however, is extremely risky; the negative result of being found out leaves one in a worse position than never having claimed the trait. The cost of deception is so high that it is sometimes better minimaxing to internalize toughness and risk serious injury in a fight to prove it. Parallel observations have been made about less controversial traits like honesty.

Though the primary meditation-like rewards of creative work should not be underestimated, the craftsman’s urge is probably at least in part just such an internalization (where the base trait is capacity for painstaking work or something similar).

Handicap theory may also be relevant. The peacock’s gaudy tail and the stag’s massive rack of antlers are sexy to females because they send a message about the health of the male (and, consequently, its fitness to sire healthy offspring). They say: I am so vigorous that I can afford to waste a lot of energy on this extravagant display. Giving away source code, like owning a sports car, is very similar to such showy, wasteful finery—it’s expense without obvious return, and makes the giver at least theoretically very sexy.

Note 20

A concise summary of Maslow’s hierarchy and related theories is available on the Web at http://www.valdosta.peachnet.edu/~whuitt/psy702/regsys/maslow.html, Maslow’s Hierarchy of Needs.

Note 21

However, demanding humility from leaders may be a more general characteristic of gift or abundance cultures. David Christie reports on a trip through the outer islands of Fiji:

In Fijian village chiefs, we observed the same sort of self-deprecating, low-key leadership style that you attribute to open source project leaders. [...] Though accorded great respect and of course all of whatever actual power there is in Fiji, the chiefs we met demonstrated genuine humility and often a saint-like acceptance of their duty. This is particularly interesting given that being chief is a hereditary role, not an elected position or a popularity contest. Somehow they are trained to it by the culture itself, although they are born to it, not chosen by their peers.

He goes on to emphasize that he believes the characteristic style of Fijian chiefs springs from the difficulty of compelling cooperation: a chief has no big carrot or big stick.

Note 22

As a matter of observable fact, people who found successful projects gather more prestige than people who do arguably equal amounts of work debugging and assisting with successful projects. An earlier version of this paper asked Is this a rational valuation of comparative effort, or is it a second-order effect of the unconscious territorial model we have adduced here? Several respondents suggested persuasive and essentially equivalent theories. The following analysis by Ryan Waldron puts the case well:

In the context of the Lockean land theory, one who establishes a new and successful project has essentially discovered or opened up new territory on which others can homestead. For most successful projects, there is a pattern of declining returns, so that after a while, the credit for contributions to a project has become so diffuse that it is hard for significant reputation to accrete to a late participant, regardless of the quality of his work.

For instance, how good a job would I have to do making modifications to the Perl code to have even a fraction of the recognition for my participation that Larry, Tom, Randal, and others have achieved?

However, if a new project is founded [by someone else] tomorrow, and I am an early and frequent participant in it, my ability to share in the respect generated by such a successful project is greatly enhanced by my early participation therein (assuming similar quality of contributions). I reckon it to be similar to those who invest in Microsoft stock early and those who invest in it later. Everyone may profit, but early participants profit more. Therefore, at some point I will be more interested in a new and successful IPO than I will be in participating in the continual increase of an existing body of corporate stock.

Ryan Waldron’s analogy can be extended. The project founder has to do a missionary sell of a new idea that may or may not be acceptable or of use to others. Thus the founder incurs something analogous to an IPO risk (of possible damage to their reputation), more so than others who assist with a project that has already garnered some acceptance by their peers. The founder’s greater reward is consistent with that greater risk, despite the fact that the assistants may be putting in more work in real terms. This is easily seen as analogous to the relationship between risk and rewards in an exchange economy.

Other respondents have observed that our nervous system is tuned to perceive differences, not steady state. The revolutionary change evidenced by the creation of a new project is therefore much more noticeable than the cumulative effect of constant incremental improvement. Thus Linus is revered as the father of Linux, although the net effect of improvements by thousands of other contributors has done more to contribute to the success of the OS than one man’s work ever could.

Note 23

The phrase de-commoditizing is a reference to the Halloween Documents, http://www.opensource.org/halloween/, in which Microsoft used de-commoditize quite frankly to refer to their most effective long-term strategy for maintaining an exploitative monopoly lock on customers.

Note 24

A respondent points out that the values surrounding the You’re not a hacker until other hackers call you a hacker norm parallel ideals professed (if not always achieved) by other meritocratic brotherhoods within social elites sufficiently wealthy to escape the surrounding scarcity economy. In the medieval European ideal of knighthood, for example, the aspiring knight was expected to fight for the right; to seek honor rather than gain; to take the side of the weak and oppressed; and to constantly seek challenges that tested his prowess to the utmost. In return, the knight-aspirant could regard himself (and be regarded by others) as among the best of the best—but only after his skill and virtue had been admitted and ratified by other knights. In the knightly ideal extolled by the Arthurian tales and Chansons de Geste we see a mix of idealism, continual self-challenge, and status-seeking similar to that which animates hackers today. It seems likely that similar values and behavioral norms should evolve around any skill that both requires great dedication and confers a kind of power.

Note 25

The Free Software Foundation’s main website carries http://www.gnu.org/philosophy/motivation.html, an article that summarizes the results of many of these studies. The quotes in this essay are excerpted from there.

Bibliography

Note 26

Miller, William Ian; Bloodtaking and Peacemaking: Feud, Law, and Society in Saga Iceland; University of Chicago Press 1990, ISBN 0-226-52680-1. A fascinating study of Icelandic folkmoot law, which both illuminates the ancestry of the Lockean theory of property and describes the later stages of a historical process by which custom passed into customary law and thence to written law.

Note 27

Malaclypse the Younger; Principia Discordia, or How I Found Goddess and What I Did To Her When I Found Her; Loompanics, ISBN 1-55950-040-9. There is much enlightening silliness to be found in Discordianism. Amidst it, the SNAFU principle provides a rather trenchant analysis of why command hierarchies don’t scale well. There’s a browseable HTML version, http://www.cs.cmu.edu/~tilt/principia/.

Note 28

J. Barkow, L. Cosmides, and J. Tooby (Eds.); The Adapted Mind: Evolutionary Psychology and the Generation of Culture. New York: Oxford University Press 1992. An excellent introduction to evolutionary psychology. Some of the papers bear directly on the three cultural types I discuss (command/exchange/gift), suggesting that these patterns are wired into the human psyche fairly deep.

Note 29

Goldhaber, Michael K.; http://www.firstmonday.dk/issues/issue2_4/goldhaber, The Attention Economy and the Net. I discovered this paper after my version 1.7. It has obvious flaws (Goldhaber’s argument for the inapplicability of economic reasoning to attention does not bear close examination), but Goldhaber nevertheless has funny and perceptive things to say about the role of attention-seeking in organizing behavior. The prestige or peer repute I have discussed can fruitfully be viewed as a particular case of attention in his sense.

Note 30

I have summarized the history of the hacker culture in Chapter 1, http://www.tuxedo.org/~esr/faqs/hacker-hist.html. The book that will explain it really well remains to be written, probably not by me.

Acknowledgements

Robert Lanphier contributed much to the discussion of egoless behavior. Eric Kidd highlighted the role of valuing humility in preventing cults of personality. The section on global effects was inspired by comments from Daniel Burn. Mike Whitaker inspired the main thread in the section on acculturation. Chris Phoenix pointed out the importance of the fact that hackers cannot gain reputation by doing other hackers down. A.J. Venter pointed out parallels with the medieval ideal of knighthood. Ian Lance Taylor sent careful criticisms of the reputation-game model which motivated me to think through and explain my assumptions more clearly.

The Magic Cauldron

Notes

Note 31

The underprovision problem would in fact scale linearly with number of users if we assumed programming talent to be uniformly distributed in the project user population as it expands over time. This is not, however, the case.

The incentives discussed in Chapter 3 (and some more conventionally economic ones as well) imply that qualified people tend to seek projects that match their interests, as well as the projects seeking them. Accordingly, theory suggests (and experience tends to confirm) that the most valuable (most qualified and motivated) people tend to discover the projects for which they fit well relatively early in the projects’ life cycles, with a corresponding fall-off later on.

Hard data are lacking, but on the basis of experience I strongly suspect the assimilation of talent over a growing project’s lifetime tends to follow a classical logistic curve.
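
For concreteness, the classical logistic (S-shaped) curve referred to here is, in the usual parameterization (K, r, and t_0 would have to be fit to project data; none of these are measured values):

    \[
      T(t) = \frac{K}{1 + e^{-r\,(t - t_0)}}
    \]

where T(t) is cumulative talent assimilated by time t: recruitment is slow while the project is obscure, steepest while it is visibly succeeding, and saturates as the pool of well-matched developers is exhausted.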

Note 32

Shawn Hargreaves has written a good analysis of the applicability of open-source methods to games; Playing the Open Source Game (http://www.talula.demon.co.uk/games.html).

Note 33

Note for accountants: the argument that service costs will eventually swamp a fixed up-front price still works if we move from constant dollars to discounted present value, because future sale revenue discounts in parallel with future service costs.

A similar but more sophisticated counter to the argument is to observe that, per-copy, service cost will go to zero when the buyer stops using the software; therefore you can still win, if the user stops before he/she has generated too much service cost. This is basically just another form of the argument that factory pricing rewards the production of shelfware. Perhaps a more instructive way to put it would be that the risk that your service costs will swamp the purchase revenue rises with the expected period of usefulness of the software. Thus, the factory model penalizes quality.
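
A minimal back-of-the-envelope sketch of this point in Python; the price, the yearly service cost, and the discount rate are all invented for illustration:

    def npv(cashflows, rate=0.08):
        """Discounted present value of a stream of yearly cashflows."""
        return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

    price = 500.0            # one-time factory-model sale price (assumed)
    yearly_service = 120.0   # vendor's service cost per year the copy stays in use (assumed)

    for years in (2, 5, 10):
        service = npv([yearly_service] * years)
        verdict = "loss" if service > price else "profit"
        print(f"{years:2d} years of use: service NPV = {service:7.2f} -> {verdict}")

The longer the copy stays in service, the larger the discounted service stream; past the break-even point (about five years under these assumed numbers) the sale is a net loss, which is the sense in which the factory model penalizes quality.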

Note 34

Wayne Gramlich has proposed that the persistence of the factory model is partly due to antiquated accounting rules, formulated when machines and buildings were more important and people less so. Software company books show the computers, office furniture, and buildings as assets and the programmers as expenses. Of course, in reality, the programmers are the true assets and the computers, office equipment, and buildings hardly matter at all. This perverse valuation is sustained by IRS and stock-market pressure for stable and uniform accounting rules that reduce the complexity of assigning a dollar figure to the company’s value. The resulting drag has prevented the rules from keeping up with reality.

On this view, pinning a high price to the bits in the product (independent of future service value) is partly a sort of defense mechanism, a way of agreeing for all parties involved to pretend that the ontological ground hasn’t fallen out from under the standard accounting rules.

(Gramlich also points out that these rules underpin the bizarre and often self-destructive acquisition sprees that many software companies tear off on after IPO. Usually the software company issues some additional stock to build up a war chest. But they can’t spend any of this money to beef up their programming staff, because the accounting rules would show that as increased expenses. Instead, the newly public software company has to grow by acquiring other software companies, because the accounting rules let you treat the acquisition as an investment.)

Note 35

For a paradigmatic example of forking following defection, consult the history of OpenSSH. This project was belatedly forked from an early version of SSH (Secure Shell) after the latter went to a closed license.

Bibliography

Note 36

The Cathedral and the Bazaar, http://www.tuxedo.org/~esr/writings/cathedral-bazaar/

Note 37

Homesteading the Noosphere, http://www.tuxedo.org/~esr/writings/homesteading/

Note 38

De Marco and Lister, Peopleware: Productive Projects and Teams (New York; Dorset House, 1987; ISBN 0-932633-05-6)

Acknowledgements

Several stimulating discussions with David D. Friedman helped me refine the inverse commons model of open-source cooperation. I am also indebted to Marshall van Alstyne for pointing out the conceptual importance of rivalrous information goods. Ray Ontko of the Indiana Group supplied helpful criticism. A good many people in audiences before whom I gave talks in the year leading up to June 1999 also helped; if you’re one of those, you know who you are.

It’s yet another testimony to the open-source model that this essay was substantially improved by email feedback I received within days after initial release. Lloyd Wood pointed out the importance of open-source software being future-proof, and Doug Dante reminded me of the Free the Future business model. A question from Adam Moorhouse led to the discussion of exclusion payoffs. Lionel Oliviera Gresse gave me a better name for one of the business models. Stephen Turnbull slapped me silly about careless handling of free-rider effects. Anthony Bailey and Warren Young corrected some facts in the Doom case study. Eric W. Sink contributed the insight that the factory model rewards shelfware.

For Further Reading:

The beginnings of an academic analytical literature on open source have begun to appear. Related material on the Web can be found at the author’s web page for this book (http://www.tuxedo.org/~esr/writings/cathedral-bazaar/).

Ross Anderson, How to Cheat at the Lottery (or, Massively Parallel Requirements Engineering). In this insightful, lucid and entertaining paper, the author presents the results of an experiment in applying bazaar-style parallelism not to coding but to the requirements analysis and system design for a difficult problem in computer security.

Available as http://www.cl.cam.ac.uk/~rja14/lottery/lottery.html.

Davis Baird, “Scientific Instrument Making, Epistemology, and the Conflict between Gift and Commodity Economies,” in Journal of the Society for Philosophy & Technology, Volume 2, Numbers 3-4. This paper is interesting because, although it never refers to software or open source and is founded in earlier anthropological literature on gift cultures, it suggests an analysis similar in many respects to that in Homesteading the Noosphere.

Available on the Web at http://scholar.lib.vt.edu/ejournals/SPT/v2_n3n4html/baird.html.

Asif Khalak, Evolutionary Model for Open Source Software: Economic Impact. The author attempts to model open-source market penetration analytically and to use computer simulation to examine the model’s dependence on various cost and behavioral parameters. Presented at Genetic and Evolutionary Computation Conference, Ph.D Workshop, July 1999.

Available as http://web.mit.edu/asif/www/ace.html.

Bojidar Mantarow, Open Source Software as a New Business Model. The author treats Red Hat Software as a case study in the effects of lowering barriers to entry in a mature market. This dissertation was submitted in partial fulfillment of the degree of MSc in International Management at University of Reading, August 1999.

Available as http://www.lochnet.net/bozweb/academic/dissert.htm.

Eben Moglen, Anarchism Triumphant: Free Software and the Death of Copyright. This paper (originally published in the Columbia Law Review) contains a regrettably large number of errors in facts and logic, and the analytical content is very nearly smothered under misguided political polemic. Nevertheless, it is an entertaining and provocative read, worth plowing through if only for the context of Moglen’s unforgettable corollary to Faraday’s Law: Wrap the Internet around every brain on the planet and spin the planet. Software flows in the wires.

Available as http://old.law.columbia.edu/my_pubs/anarchism.html.
