Appendix A. The Last of the True Hackers

Around the time of Ken Williams’ housewarming party, twenty-five years after the MIT Tech Model Railroad Club discovered the TX-0, a man who called himself the last true hacker sat in a room on the ninth floor of Tech Square—a room cluttered with printouts, manuals, a bedroll, and a blinking computer terminal connected to a direct descendant of the PDP-6, a DEC-20 computer. His name was Richard Stallman, and he spoke in a tense, high-pitched voice that did not attempt to veil the emotion with which he described, in his words, “the rape of the artificial intelligence lab.” He was thirty years old. His pale complexion and scraggly dark hair contrasted vividly with the intense luminescence of his deep green eyes. The eyes moistened as he described the decay of the Hacker Ethic at Tech Square.

Richard Stallman had come to MIT twelve years before, in 1971, and had experienced the epiphany that others had enjoyed when they discovered that pure hacker paradise, the Tech Square monastery where one lived to hack, and hacked to live. Stallman had been entranced with computers since high school. At camp one summer, he had amused himself with computer manuals borrowed from his counselors. In his native Manhattan, he found a computing center to exercise his new passion. By the time he entered Harvard he was an expert at assembly languages, operating systems, and text editors. He had also found that he had a deep affinity for the Hacker Ethic and was militant in his execution of its principles. It was a search for an atmosphere more compatible with hacking that brought him from Harvard’s relatively authoritarian computing center, down Massachusetts Avenue, to MIT.

The thing he liked about the AI lab at Tech Square was that “there were no artificial obstacles, things that are insisted upon that make it hard for people to get any work done—things like bureaucracy, security, refusals to share with other people.” He also loved being with people for whom hacking was a way of life. He recognized that his personality was unyielding to the give-and-take of common human interaction. On the ninth floor he could be appreciated for his hacking and be part of a community built around that magical pursuit.

His wizardry soon became apparent, and Russ Noftsker, the administrator of the AI lab who had taken the tough security measures during the Vietnam protests, hired Stallman as a systems programmer. Richard was often in night phase, and when the people in the lab discovered after the fact that he was simultaneously earning a magna cum laude degree in physics at Harvard, even those master hackers were astonished.

As he sat at the feet of such as Richard Greenblatt and Bill Gosper, whom he considered his mentors, Stallman’s view of the Hacker Ethic solidified. He came to see the lab as the embodiment of that philosophy: a constructive anarchism which, as Stallman wrote into a computer file once, “does not mean advocating a dog-eat-dog jungle. American society is already a dog-eat-dog jungle, and its rules maintain it that way. We [hackers] wish to replace those rules with a concern for constructive cooperation.”

Stallman, who liked to be called by his initials, RMS, in tribute to the way he logged on to the computer, used the Hacker Ethic as a guiding principle for his best-known work, an editing program called EMACS which allowed users to limitlessly customize it—its wide-open architecture encouraged people to add to it, improve it endlessly. He distributed the program free to anyone who agreed to his one condition: “that they give back all extensions they made, so as to help EMACS improve. I called this arrangement ‘the EMACS commune,’” RMS wrote. “As I shared, it was their duty to share; to work with each other rather than against.”

EMACS became almost a standard text editor in university computer science departments. It was a shining example of what hacking could produce.

But as the seventies progressed, Richard Stallman began to see changes in his beloved preserve. The first incursion was when passwords were assigned to Officially Sanctioned Users, and unauthorized users were kept off the system. As a true hacker, RMS despised passwords and was proud of the fact that the computers he was paid to maintain did not use them. But the MIT computer science department (run by different people than the AI lab) decided to install security on its machine.

Stallman campaigned to eliminate the practice. He encouraged people to use the “Empty String” password—a carriage return instead of a word. So when the machine asked for your password, you would hit the RETURN key and be logged on. Stallman also broke the computer’s encryption code and was able to get to the protected file which held people’s passwords. He started sending people messages which would appear on screen when they logged onto the system:

I see you chose the password [such and such]. I suggest that you switch to the password “carriage return.” It’s much easier to type, and also it upholds the principle that there should be no passwords.

“Eventually I got to a point where a fifth of all the users on the machine had the Empty String password,” RMS later boasted.

Then the computer science laboratory installed a more sophisticated password system on its other computer. This one was not so easy for Stallman to crack. But Stallman was able to study the encryption program, and, as he later said, “I discovered that changing one word in that program would cause it to print out your password on the system console as part of the message that you were logging in.” Since the “system console” was visible to anyone walking by, and its messages could easily be accessed by any terminal, or even printed out in hard copy, Stallman’s change allowed any password to be routinely disseminated by anyone who cared to know it. He thought the result “amusing.”

Still, the password juggernaut rolled on. The outside world, with its affection for security and bureaucracy, was closing in. The security mania even infected the holy AI computer. The Department of Defense was threatening to take the AI machine off the ARPAnet network—to separate the MIT people from the highly active electronic community of hackers, users, and plain old computer scientists around the country—all because the AI lab steadfastly refused to limit access to its computers. DOD bureaucrats were apoplectic: anyone could walk in off the street and use the AI machine, and connect to other locations in the Defense Department network! Stallman and others felt that was the way it should be. But he came to understand that the number of people who stood with him was dwindling. More and more of the hard-core hackers were leaving MIT, and many of the hackers who had formed the culture and given it a backbone by their behavior were long gone.

What had happened to the hackers of yesteryear? Many had gone to work for businesses, implicitly accepting the compromises that such work entailed. Peter Samson, the TMRC hacker who was among the first to discover the TX-0, was in San Francisco, still with the Systems Concepts company cofounded by master phone hacker Stew Nelson. Samson could explain what had happened: “[Hacking] now competes for one’s attention with real responsibilities—working for a living, marrying, having a child. What I had then that I don’t have now is time, and a certain amount of physical stamina.” It was a common conclusion, more or less shared by people like Samson’s TMRC colleague Bob Saunders (working for Hewlett-Packard, two children in high school), David Silver (after growing up in the AI lab, he now headed a small robotics firm in Cambridge), Slug Russell (the author of Spacewar was programming for a firm outside of Boston and playing with his Radio Shack home computer), and even Stew Nelson, who despite remaining in Bachelor Mode complained that in 1983 he wasn’t able to hack as much as he’d like. “It’s almost all business these days, and we don’t have that much time for the technical stuff we’d like to do,” said the man who over two decades ago had instinctively used the PDP-1 to explore the universe that was the phone system.

There would never be another generation like them; Stallman realized this every time he saw the behavior of the new “tourists” taking advantage of the freedom of the AI computer. They did not seem as well intentioned or as eager to immerse themselves into the culture as their predecessors. In previous times, people seemed to recognize that the open system was an invitation to do good work and improve yourself to the point where you might one day be considered a real hacker. Now, some of these new users could not handle the freedom to poke around a system with everyone’s files open to them. “The outside world is pushing in,” Stallman admitted. “More and more people come in having used other computer systems. Elsewhere, it’s taken for granted that if anybody else can modify your files, you’ll be unable to do anything, you’ll be sabotaged every five minutes. Fewer and fewer people are around who grew up here the old way, and know that it’s possible, and it’s a reasonable way to live.”

Stallman kept fighting, trying, he said, “to delay the fascist advances with every method I could.” Though his official systems programming duties were equally divided between the computer science department and the AI lab, he went “on strike” against the Lab for Computer Science because of their security policy. When he came out with a new version of his EMACS editor, he refused to let the computer science lab use it. He realized that in a sense he was punishing users of that machine rather than the people who made policy. “But what could I do?” he later said. “People who used that machine went along with the policy. They weren’t fighting. A lot of people were angry with me, saying I was trying to hold them hostage or blackmail them, which in a sense I was. I was engaging in violence against them because I thought they were engaging in violence to everyone at large.”

Passwords were not the only problem Richard Stallman had to face in what was becoming more and more a solitary defense of the pure Hacker Ethic at MIT. Many of the new people around the lab had learned computing on small machines and were untutored in hacker principles. Like Third-Generation hackers, they saw nothing wrong with the concept of ownership of programs. These new people would write exciting new programs just as their predecessors did, but something new would come along with them—as the programs appeared on the screen, so would copyright notices. Copyright notices! To RMS, who still believed that all information should flow freely, this was blasphemy. “I don’t believe that software should be owned,” he said in 1983, years too late. “Because [the practice] sabotages humanity as a whole. It prevents people from getting the maximum benefit out of the program’s existence.”

It was this kind of commercialism, in Richard Stallman’s view, that delivered the fatal blow to what was left of the idealistic community he had loved. It was a situation that embodied the evil, and immersed the remaining hackers into bitter conflict. It all began with Greenblatt’s LISP machine.

               • • • • • • • •

With the passing of years, Richard Greenblatt had remained perhaps the prime link to the days of ninth-floor hacker glory. In his mid-thirties now, the single-minded hacker of the Chess Machine and MacLISP was moderating some of his more extreme personal habits, grooming his short hair more often, varying his wardrobe more, and even tentatively thinking about the opposite sex. But he still could hack like a demon. And now he was beginning to see the realization of a dream he had formed long ago—a total, all-out hacker computer.

He had come to realize that the LISP language was extensible and powerful enough to give people the control to build and explore the kind of systems that could satisfy the hungriest hacker mentality. The problem was that no computer could easily handle the considerable demands that LISP put on a machine. So in the early seventies Greenblatt started to design a computer which would run LISP faster and more efficiently than any machine had done before. It would be a single-user machine—finally a solution to the esthetic problem of time sharing, where the hacker is psychologically frustrated by a lack of ultimate control over the machine. By running LISP, the language of artificial intelligence, the machine would be a pioneering workhorse of the next generation of computers, machines with the ability to learn; to carry on intelligent dialogues with the user on everything from circuit design to advanced mathematics.

So with a small grant, he and some other hackers—notably Tom Knight, who had been instrumental in designing (and naming) the Incompatible Time-sharing System—began work. It was slow going, but by 1975 they had what they called a “Cons” machine (named for “cons,” the fundamental constructor operation in LISP). The Cons machine did not stand alone and had to be connected to the PDP-10 to work. It was two bays wide, with the circuit boards and the tangle of wires exposed, and they built it right there on the ninth floor of Tech Square, on the uplifted floor with air conditioning underneath.
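The machine’s namesake operation is simple to illustrate. In LISP, “cons” allocates a pair (a “cons cell”) whose two halves, car and cdr, chain together to form lists; the sketch below, in Python rather than LISP and not part of the original text, shows the idea under that minimal model:

```python
# A minimal model of LISP's "cons" operation: it builds a pair
# (a "cons cell"); car returns the first half, cdr the second.
def cons(car_val, cdr_val):
    return (car_val, cdr_val)

def car(cell):
    return cell[0]

def cdr(cell):
    return cell[1]

# The list (1 2 3) is nested cons cells terminated by nil (None here):
lst = cons(1, cons(2, cons(3, None)))
print(car(lst))       # 1
print(car(cdr(lst)))  # 2
```

Because every LISP list is ultimately built out of these cells, a machine optimized for allocating and traversing them—as Greenblatt’s was—speeds up nearly everything a LISP program does.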

It worked as Greenblatt hoped it would. “LISP is a very easy language to implement,” Greenblatt later explained. “Any number of times, some hacker goes off to some machine and works hard for a couple of weeks and writes a LISP. ‘See, I’ve got LISP.’ But there’s a hell of a difference between that and a really usable system.” The Cons machine, and later the stand-alone LISP machine, was a usable system. It had something called “virtual address space,” which assured that the space programs consumed wouldn’t routinely overwhelm the machine, as was the case in other LISP systems. The world you built with LISP could be much more intricate. A hacker working at the machine would be like a mental rocket pilot traveling in a constantly expanding LISP universe.

For the next few years they worked to get the machine to be a standalone. MIT was paying their salaries, and of course they were all doing systems work on ITS and random AI hacking, too. The break came when ARPA kicked in money for the group to build six machines for about fifty thousand dollars each. Then some other money came to build more machines.

Eventually the hackers at MIT would build thirty-two LISP machines. From the outside, the LISP computer looked like a central air conditioning unit. The visual action all occurred in a remote terminal, with a sleek, long keyboard loaded with function keys and an ultra-high-resolution bit-mapped display. At MIT the idea was to connect several LISP machines in a network, so while each user had full control he could also be hacking as part of a community, and the values arising from a free flow of information would be maintained.

The LISP machine was a significant achievement. But Greenblatt realized that something beyond making a few machines and hacking on them would be necessary. This LISP machine was an ultimately flexible world-builder, an embodiment of the hacker dream . . . but its virtues as a “thinking machine” also made it a tool for America to maintain its technological lead in the artificial intelligence race with the Japanese. The LISP machine had implications bigger than the AI lab, certainly, and technology like this would be best disseminated through the commercial sector. Greenblatt: “I generally realized during this whole process that we [were] probably gonna start a company some day and eventually make these LISP machines commercially. [It was a] sooner-or-later-it’s-gonna-happen kind of thing. So as the machine got to be more complete we started poking around.”

That was how Russell Noftsker got into the situation. The former AI lab administrator had left his post under duress in 1973 and gone to California to go into business. Every so often he would come back to Cambridge and stop by the lab, see what the AI workers were up to. He liked the idea of LISP machines and expressed interest in helping the hackers form a company.

“Initially pretty much everyone was against him,” Greenblatt later recalled. “At the time that Noftsker left the lab, I was on considerably better terms with him than anyone else. Most of the people really hated this guy. He had done a bunch of things that were really very paranoid. But I said, ‘Well, give him a chance.’”

People did, but it soon became clear that Noftsker and Greenblatt had different ideas of what a company should be. Greenblatt was too much a hacker to accept a traditional business construct. What he wanted was something “towards the AI pattern.” He did not want a load of venture capital. He preferred a bootstrap approach, where the company would get an order for a machine, build it, then keep a percentage of the money and put it back into the company. He hoped that his firm could maintain a steady tie to MIT; he even envisioned a way where they could all remain affiliated with the AI lab. Greenblatt himself was loath to leave; he had firmly set out the parameters for his universe. While his imagination had free rein inside a computer, his physical world was still largely bounded by his cluttered office with terminal on the ninth floor and the room he had rented since the mid-sixties from a retired dentist (now deceased) and the dentist’s wife. He would travel all over the world to go to artificial intelligence conferences, but the discussions in these remote places would be continuations of the same technical issues he would debate in the lab, or in ARPAnet computer mail. He was very much defined by the hacker community, and though he knew that commercialization to some extent was necessary to spread the gospel of the LISP machine, he wanted to avoid any unnecessary compromise of the Hacker Ethic: like lines of code in a systems program, compromise should be bummed to the minimum.

Noftsker considered this unrealistic, and his point of view filtered down to the other hackers involved in the project. Besides Tom Knight, these included some young wizards who had not been around in the golden age of the ninth floor, and had a more pragmatic approach to what was called for. “My perception [of Greenblatt’s idea] was to start a company which made LISP machines in sort of a garage shop. It was clear that it was impractical,” Tom Knight later said. “The world just isn’t that way. There’s only one way in which a company works and that is to have people who are motivated to make money.”

Knight and the others perceived that Greenblatt’s model for a company was something like Systems Concepts in San Francisco, which included former MIT hackers Stewart Nelson and Peter Samson. Systems Concepts was a small-scale company, guided by a firm resolve not to have to answer to anyone holding purse strings. “Our initial goal was not necessarily to get infinitely rich,” explained cofounder Mike Levitt in 1983, “but to control our own destiny. We don’t owe anybody anything.” The MIT hackers, though, asked what the impact of Systems Concepts had been—after over a decade, they concluded, it was still small and not terribly influential. Knight looked at Systems Concepts—“Low-risk, don’t take any external funding, don’t hire anybody you don’t know, that mode,” he said. “Not going very far.” He and the others had a larger vision for a LISP machine company.

Russ Noftsker also saw, and exploited, the fact that many of the hackers were reluctant to work in a company led by Greenblatt. Greenblatt was so focused on making LISP machines, on the mission of hacking, on the work that had to be done, that he often neglected to acknowledge people’s humanity. And as old-time hackers got older, this was more and more an issue. “Everyone tolerated him for his brilliance and productivity,” Noftsker later explained, “[but] finally he started using the bludgeon or cat-o’-nine-tails to try to whip people into shape. He’d berate people who weren’t used to it. He’d treat them like they were some kind of production mule team. It finally got to the point where communications had broken down and they even took the extreme measure of moving off the ninth floor in order to get away from Richard.”

Things came to a head in a meeting in February 1979, when it was clear that Greenblatt wanted a hacker-style company and power to ensure that it remain so. It was an awkward demand, since for so long the lab had, as Knight put it, “been run on anarchistic principles, based on the ideal of mutual trust and mutual respect for the technical competence of the people involved built up over many years.” But anarchism did not seem to be The Right Thing in this case. Nor, for many, was Greenblatt’s demand. “I couldn’t see, frankly, having him fulfilling a presidential role in a company that I was involved in,” said Knight.

Noftsker: “We were all trying to talk him out of it. We begged him to accept a structure where he would be equal to the rest of us and where we would have professional management. And he refused to do it. So we went around the room and asked every single person in the technical group if they would accept an organization that had any of the elements [that Greenblatt wanted]. And everyone said they would not participate in [such a] venture.”

It was a standoff. Most of the hackers would not go with Greenblatt, the father of the LISP machine. Noftsker and the rest said they would give Greenblatt a year to form his own company, but in somewhat less than a year they concluded that Greenblatt and the backers he managed to find for his LISP Machine Incorporated (LMI) were not “winning,” so they formed a heavily capitalized company called Symbolics. They were sorry to be making and selling the machines to which Greenblatt had contributed so much, but felt it had to be done. LMI people felt betrayed; whenever Greenblatt spoke of the split, his speech crawled to a slow mumble, and he sought ways to change the uncomfortable subject. The bitter schism was the kind of thing that might happen in business or when people invested emotion in relationships and human interaction, but it was not the kind of thing you saw in the hacking life.

The AI lab became a virtual battleground between two sides, and the two firms, especially Symbolics, hired away many of the lab’s remaining hackers. Even Bill Gosper, who had been working at Stanford and Xerox during that time, eventually joined the new research center Symbolics had formed in Palo Alto. When Symbolics complained about the possible conflict of interest of LMI people working for the AI lab (it felt that MIT, by paying salaries to those LMI part-timers, was funding their competitor), the hackers still affiliated with the lab, including Greenblatt, had to resign.

It was painful for everybody, and when both companies came out with similar versions of LISP machines in the early 1980s it was clear that the problem would be there for a long time. Greenblatt had made some compromises in his business plan—making, for example, a deal whereby LMI got money and support from Texas Instruments in exchange for a fourth of the stock—and his company was surviving. The more lavish Symbolics had hired the cream of hackerism and had even signed a contract to sell its machines to MIT. The worst part was that the ideal community of hackers, those people who, in the words of Ed Fredkin, “kind of loved each other,” were no longer on speaking terms. “I’d really like to talk to [Greenblatt],” said Gosper, speaking for many Symbolics hackers who had virtually grown up with the most canonical of hackers and now were cut off from his flow of information. “I don’t know how happy or unhappy he is with me for having thrown in with the bad guys here. But I’m sorry, I’m afraid they were right this time.”

But even if people in the companies were speaking to each other, they could not talk about what mattered most—the magic they had discovered and forged inside the computer systems. The magic was now a trade secret, not for examination by competing firms. By working for companies, the members of the purist hacker society had discarded the key element in the Hacker Ethic: the free flow of information. The outside world was inside.

               • • • • • • • •

The one person who was most affected by the schism, and its effect on the AI lab, was Richard Stallman. He grieved at the lab’s failure to uphold the Hacker Ethic. RMS would tell strangers he met that his wife had died, and it would not be until later in the conversation that the stranger would realize that this thin, plaintive youngster was talking about an institution rather than a tragically lost bride.

Stallman later wrote his thoughts into the computer:

It is painful for me to bring back the memories of this time. The people remaining at the lab were the professors, students, and nonhacker researchers, who did not know how to maintain the system, or the hardware, or want to know. Machines began to break and never be fixed; sometimes they just got thrown out. Needed changes in software could not be made. The non-hackers reacted to this by turning to commercial systems, bringing with them fascism and license agreements. I used to wander through the lab, through the rooms so empty at night where they used to be full and think, “Oh my poor AI lab! You are dying and I can’t save you.” Everyone expected that if more hackers were trained, Symbolics would hire them away, so it didn’t even seem worth trying . . . the whole culture was wiped out . . .

Stallman bemoaned the fact that it was no longer easy to drop in or call around dinnertime and find a group eager for a Chinese dinner. He would call the lab’s number, which ended in 6765 (“Fibonacci of 20,” people used to note, pointing out a numerical trait established early on by some random math hacker), and find no one to eat with, no one to talk with.
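The hackers’ observation about the phone number checks out: with F(1) = F(2) = 1, the twentieth Fibonacci number is 6765. A short Python sketch (not part of the original text) confirms the arithmetic:

```python
# Verify the lab's numerical in-joke: extension 6765 is the
# 20th Fibonacci number, counting F(1) = F(2) = 1.
def fib(n):
    a, b = 0, 1          # a = F(0), b = F(1)
    for _ in range(n):
        a, b = b, a + b  # advance one step in the sequence
    return a

print(fib(20))  # 6765
```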

Richard Stallman felt he had identified the villain who destroyed the lab: Symbolics. He took an oath: “I will never use a Symbolics LISP machine or help anybody else to do so . . . I don’t want to speak to anyone who works for Symbolics or the people who deal with them.” While he also disapproved of Greenblatt’s LMI company, because as a business it sold computer programs which Stallman believed the world should have for free, he felt that LMI had attempted to avoid hurting the AI lab. But Symbolics, in Stallman’s view, had purposely stripped the lab of its hackers in order to prevent them from donating competing technology to the public domain.

Stallman wanted to fight back. His field of battle was the LISP operating system, which originally was shared by MIT, LMI, and Symbolics. This changed when Symbolics decided that the fruits of its labor would be proprietary; why should LMI benefit from improvements made by Symbolics hackers? So there would be no sharing. Instead of two companies pooling energy toward an ultimately featureful operating system, they would have to work independently, expending energy to duplicate improvements.

This was RMS’s opportunity for revenge. He set aside his qualms about LMI and began cooperating with that firm. Since he was still officially at MIT and Symbolics installed its improvements on the MIT machines, Stallman was able to carefully reconstruct each new feature or fix of a bug. He then would ponder how the change was made, match it, and present his work to LMI. It was not easy work, since he could not merely duplicate the changes—he had to figure out innovatively different ways to implement them. “I don’t think there’s anything immoral about copying code,” he explained. “But they would sue LMI if I copied their code, therefore I have to do a lot of work.” A virtual John Henry of computer code, RMS had single-handedly attempted to match the work of over a dozen world-class hackers, and managed to keep doing it during most of 1982 and almost all of 1983. “In a fairly real sense,” Greenblatt noted at the time, “he’s been outhacking the whole bunch of them.”

Some Symbolics hackers complained not so much because of what Stallman was doing, but because they disagreed with some of the technical choices Stallman made in implementation. “I really wonder if those people aren’t kidding themselves,” said Bill Gosper, himself torn between loyalty to Symbolics and admiration for Stallman’s master hack. “Or if they’re being fair. I can see something Stallman wrote, and I might decide it was bad (probably not, but someone could convince me it was bad), and I would still say, ‘But wait a minute—Stallman doesn’t have anybody to argue with all night over there. He’s working alone! It’s incredible anyone could do this alone!’”

Russ Noftsker, president of Symbolics, did not share Greenblatt’s or Gosper’s admiration. He would sit in Symbolics’ offices, relatively plush and well decorated compared to LMI’s ramshackle headquarters a mile away, his boyish face knotting with concern when he spoke of Stallman. “We develop a program or an advancement to our operating system and make it work, and that may take three months, and then under our agreement with MIT, we give that to them. And then [Stallman] compares it with the old ones and looks at that and sees how it works and reimplements it [for the LMI machines]. He calls it reverse engineering. We call it theft of trade secrets. It does not serve any purpose at MIT for him to do that because we’ve already given that function out [to MIT]. The only purpose it serves is to give that to Greenblatt’s people.”

Which was exactly the point. Stallman had no illusions that his act would significantly improve the world at large. He had come to accept that the domain around the AI lab had been permanently polluted. He was out to cause as much damage to the culprit as he could. He knew he could not keep it up indefinitely. He set a deadline for his work: the end of 1983. After that he was uncertain of his next step.

He considered himself the last true hacker left on earth. “The AI lab used to be the one example that showed it was possible to have an institution that was anarchistic and very great,” he would explain. “If I told people it’s possible to have no security on a computer without people deleting your files all the time and no bosses stopping you from doing things, at least I could point to the AI lab and say, ‘Look, we are doing it. Come use our machine! See!’ I can’t do that anymore. Without this example, nobody will believe me. For a while we were setting an example for the rest of the world. Now that this is gone, where am I going to begin from? I read a book the other day. It’s called Ishi, the Last Yahi. It’s a book about the last survivor of a tribe of Indians, initially with his family, and then gradually they died out one by one.”

That was the way Richard Stallman felt. Like Ishi.

“I’m the last survivor of a dead culture,” said RMS. “And I don’t really belong in the world anymore. And in some ways I feel I ought to be dead.”

Richard Stallman did leave MIT, but he left with a plan: to write a version of the popular proprietary computer operating system called UNIX and give it away to anyone who wanted it. Working on this GNU (which stood for “GNU’s Not Unix”) program meant that he could “continue to use computers without violating [his] principles.” Having seen that the Hacker Ethic could not survive in the unadulterated form in which it had formerly thrived at MIT, he realized that numerous small acts like his would keep the Ethic alive in the outside world.

               • • • • • • • •

What Stallman did was to join a mass movement of real-world hackerism set in motion at the very institution which he was so painfully leaving. The emergence of hackerism at MIT twenty-five years before was a concentrated attempt to fully ingest the magic of the computer; to absorb, explore, and expand the intricacies of those bewitching systems; to use those perfectly logical systems as an inspiration for a culture and a way of life. It was these goals which motivated the behavior of Lee Felsenstein and the hardware hackers from Albuquerque to the Bay Area. The happy byproduct of their actions was the personal computer industry, which exposed the magic to millions of people. Only the tiniest percentage of these new computer users would experience that magic with the all-encompassing fury of the MIT hackers, but everyone had the chance to...and many would get glimpses of the miraculous possibilities of the machine. It would extend their powers, spur their creativity, and teach them something, perhaps, of the Hacker Ethic, if they listened.

As the computer revolution grew in a dizzying upward spiral of silicon, money, hype, and idealism, the Hacker Ethic became perhaps less pure, an inevitable result of its conflict with the values of the outside world. But its ideas spread throughout the culture each time some user flicked the machine on, and the screen came alive with words, thoughts, pictures, and sometimes elaborate worlds built out of air—those computer programs which could make any man (or woman) a god.

Sometimes the purer pioneers were astounded at their progeny. Bill Gosper, for instance, was startled by an encounter in the spring of 1983. Though Gosper worked for the Symbolics company and realized that he had sold out, in a sense, by hacking in the commercial sector, he was still very much the Bill Gosper who once sat at the ninth-floor PDP-6 like some gregarious alchemist of code. You could find him in the wee hours in a second-floor room near El Camino Real in Palo Alto, his beat-up Volvo the only car in the small lot outside the nondescript two-story building that housed Symbolics’ West Coast research center. Gosper, now forty, his sharp features hidden behind large wire-rim glasses and his hair knotted in a ponytail which came halfway down his back, still hacked LIFE, watching with rollicking amusement as the terminal of his LISP machine cranked through billions of generations of LIFE colonies.

“I had the most amazing experience when I went to see Return of the Jedi,” Gosper said. “I sat down next to this kid of fifteen or sixteen. I asked him what he did, and he said, ‘Oh, I’m basically a hacker.’ I almost fell over. I didn’t say anything. I was completely unprepared for that. It sounded like the most arrogant thing I ever heard.”

The youngster had not been boasting, of course, but describing who he was. Third-Generation hacker. With many more generations to follow.

To the pioneers like Lee Felsenstein, that continuation represented a goal fulfilled. The designer of the Sol and the Osborne 1, the cofounder of Community Memory, the hero of the pseudo-Heinlein novel of his own imagination often would boast that he had been “present at the creation,” and he saw the effects of the boom that followed at a close enough range to see its limitations and its subtle, significant influence. After he made his paper fortune at Osborne, he saw it flutter away just as quickly, as poor management and arrogant ideas about the marketplace caused Osborne Computer to collapse within a period of a few months in 1983. He refused to mourn his financial loss. Instead he took pride in celebrating that “the myth of the megamachine bigger than all of us [the evil Hulking Giant, approachable only by the Priesthood] has been laid to rest. We’re able to come back down off worship of the machine.”

Lee Felsenstein had learned to wear a suit with ease, to court women, to charm audiences. But what mattered was still the machine and its impact on people. He had plans for the next step. “There’s more to be done,” he said not long after Osborne Computer went down. “We have to find a relationship between man and machine which is much more symbiotic. It’s one thing to come down from one myth, but you have to replace it with another. I think you start with the tool: the tool is the embodiment of the myth. I’m trying to see how you can explain the future that way, create the future.”

He was proud that his first battle—to bring computers to the people—had been won. Even as he spoke, the Third Generation of hackers was making news, not only as superstar game designers, but as types of culture heroes who defied boundaries and explored computer systems. A blockbuster movie called WarGames had as its protagonist a Third-Generation hacker who, having no knowledge of the groundbreaking feats of Stew Nelson or Captain Crunch, broke into computer systems with the innocent wonder of their Hands-On Imperative. It was one more example of how the computer could spread the Ethic.

“The technology has to be considered as larger than just the inanimate pieces of hardware,” said Felsenstein. “The technology represents inanimate ways of thinking, objectified ways of thinking. The myth we see in WarGames and things like that is definitely the triumph of the individual over the collective dis-spirit. [The myth is] attempting to say that the conventional wisdom and common understandings must always be open to question. It’s not just an academic point. It’s a very fundamental point of, you might say, the survival of humanity, in a sense that you can have people [merely] survive, but humanity is something that’s a little more precious, a little more fragile. So that to be able to defy a culture which states that ‘Thou shalt not touch this,’ and to defy that with one’s own creative powers is . . . the essence.”

The essence, of course, of the Hacker Ethic.
