
Text, Translation, and the End of the Unified Press

David Alan Grier

ABSTRACT

Few industries were as thoroughly transformed by computerization as the newspaper business. Computer composition and computer typesetting restructured the basic organization of the newspaper, redistributed control within it, and encouraged the flourishing of newspaper chains in the 1970s and early 1980s. Electronic distribution, however, radically changed the relationship between advertiser and publisher, and thereby began the decline of the traditional regional paper. As publishers adopted computer systems to manage the flow of information, they generally saw the new technology not as something that would expand the scale or scope of their work but as a set of tools that would translate information from one form into another.

Introduction

The early computer designers, von Neumann, Eckert, Mauchly, Atanasoff, Aiken, and Zuse, gave no suggestion that their ideas would have application to the production and dissemination of news. During the period that marked the foundational work on computing machines, which began roughly in 1936 and extended until about 1948, they described their machines as computing devices. Konrad Zuse, who worked in Germany at the start of World War II, was typical of the group when he wrote that his machine “serves the purpose of automatic execution by a computer of frequently recurring computations” (Zuse, 1982, p. 163). Even John von Neumann, who probably had the best grasp of the universal nature of computing, still described, in the earliest documents, computing systems that “can carry out instructions to perform calculations of considerable order of complexity” (Neumann, 1945).

All of these pioneers were building machines to solve specific scientific problems that required substantial numerical computation: the design of vacuum tubes, shock wave propagation, ballistics tables, systems of linear equations. They had little reason to speculate about potential applications of their technology beyond science and engineering. At the same time, the foundational news-gathering institution, the newspaper, was firmly identified with a different technology. It was known as the Press. That title was enshrined in the US Constitution, which guaranteed the right to publish as “freedom of the press.” The newspaper had been identified with the technology of the printing press since the English Civil War of the seventeenth century, when partisans condemned their opponents for destroying printing machines. As Alexander Brome (1620–1666) wrote in the poem “June 1, 1643,” “And carefully muzzled the mouth of the press, / lest the truth should peep through their juggling dress. / For they knew a cessation would work them more harms, / Than Essex could do the cavaliers with his arms” (in Chalmers, 1810, p. 667).

When asked to assess the new computing technology, newspaper editors accepted the ideas that were offered to them by engineers and scientists. They concluded that computers were tools of national security and could be applied only to mathematical calculations. These machines, according to an early New York Times report, were destined for universities, military laboratories, and other organizations that might serve as “a consultation center for the research laboratories of science and industry” (Special to the New York Times, 1944).

As we have learned in subsequent years, computers were flexible and powerful devices that had much to offer newsgathering organizations. Step by step, they were adapted to activities that were of value to newspapers, activities that included not only the tasks of financial accounting and other numeric activities required by active businesses, but also the work of gathering, editing, printing, and distributing the stories that described the news. This process of adaptation largely involved the preparation of software that could manipulate text. To create this software, computer scientists had to develop a substantial body of knowledge that dealt with the mechanical nature of text and language. This body of knowledge was not developed in isolation. The needs of publishers informed the theories of the computer scientists just as much as the work of the computer scientists altered the operation of the newsroom.

Ultimately, computers became central features of newspapers, not replacing printing presses but changing their role in a news organization. Computers first appeared in the business offices of newspapers with software that could handle general ledgers, accounts payable, and subscription lists. They quickly advanced into typesetting, as programmers came to appreciate the mechanical nature of the work. From there, they moved to the editorial offices with word processing systems, to the library with information systems, and finally to the printing room with systems that controlled presses. Eventually, newspapers learned to use software to distribute the news and track its consumption.

In each of these settings, software acted as a translator, transforming information from one language (or representation) into another. In this role, the software systems of the news operation moved beyond the original vision of computers as automatic machines that would carry out a given set of instructions to produce an answer. Instead, they became a technology that reshaped a human organization by isolating the tasks that transformed information and mechanizing them.

Connection to Computing History

The history of newspaper computerization inverts the commonly told story of the computer. It emphasizes the impact of certain technical developments that changed the organizational structure and operation of media companies. Those technical developments, while important to a number of industries, are only small parts of the major themes of computing history. Traditionally, the story of computing is told through the commodification of computing hardware (Ceruzzi, 2003) and the rising complexity of software (Campbell-Kelly, 2003).

In these stories, the computer begins as a specialized tool with the construction of the ENIAC at the University of Pennsylvania in 1946. From that point, the history moves through five stages that mark the increasing complexity and standardization of computer hardware. The first of these stages, in which each computer is a unique, handcrafted instrument, lasts until roughly 1954. In the next stage, commercial manufacturers produce fixed models of computers that are generally incompatible with each other. The third stage, which marks the start of common families of computers, begins in 1964 with the announcement of the IBM 360 family of machines. This period also marks the rise of the smaller, less expensive minicomputer, which brought the technology to a wider audience of businesses, schools, and other organizations. The fourth stage, which begins in roughly 1974, is that of the microcomputer. This era is notable less for the size and cost of computing equipment, though both shrank rapidly during this period, than for the fact that these machines came to be based on common families of processor chips, most notably Intel's, that had fixed instruction sets and hence were capable of running common software. The fifth stage, which begins in 1992, is the age of connectivity, in which computers operate more and more in concert, sharing both data and programs.

The story of software properly starts before the invention of the computer, as it draws heavily on the ideas of production management. It became the dominant story in computing, however, only after the rise of common families of computers, such as the IBM 360, which created an environment that encouraged the commoditization of software.

Behind the growth of the software industry is another story of commoditization and standardization. It is generally seen as a response to the tight market for computer programmers that developed in the mid-1960s, an event that is often called the “programming crisis.” This market left many organizations without sufficient staff to create the software that would be necessary to support basic business tasks. The software industry addressed this crisis by allowing businesses and other organizations to purchase developed programs and hence operate with only a small programming staff. The start of this industry is generally dated to 1968, when NATO held a conference on software engineering and when IBM announced that they would no longer provide their customers with programming products without charge.

These histories of hardware and software describe only the framework of the computing age. Though these technologies brought common ideas and capabilities to the industrial landscape, they had different impacts on different organizations and on different modes of production. The resulting changes often mirror the growth and commoditization of the underlying technology, but they are also marked by crises that occur when computing technology becomes sufficiently advanced and sufficiently inexpensive to be adopted by an entire industry, such as the newspaper industry.

Translating Information

For the purposes of analyzing the role of digital technology in news organizations, the computer age began in the summer of 1945, as World War II came to an end. This date falls about six months before the US government released the news of the computing machines that it developed during the war and a full year before the Moore School Lectures at the University of Pennsylvania, the summer seminars that first described the design and operation of the stored program computer (Campbell-Kelly & Williams, 1985).

During the summer of 1945, the nation's most prominent scientist described a machine that might have some value to the nation's newspapers. Writing in the Atlantic Monthly, Vannevar Bush described a machine that could store, organize, and retrieve information. Bush had led the Office of Scientific Research and Development during the war and had followed the development of electronic technology. He had designed a computing machine in the late 1920s to solve differential equations, a class of problems that were central to many fields of engineering (Bush, 1931). As the war came to an end, he started to speculate about the value of computing machines to other fields of endeavor.

“Consider a future device for individual use,” he wrote, “which is a sort of mechanized private file and library.” He viewed this machine as an “enlarged intimate supplement” to individual intelligence. It would be “a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility” (Bush, 1945). He suggested that his machine, which he called the Memex, could be built from microfilm readers, electronic circuitry, photoelectric tubes, and electric motors.

The Memex excited readers as it seemed to embody the best of scientific research. “Like Emerson's famous address of 1837 on ‘The American Scholar,’” proclaimed the editor of Atlantic Monthly, “this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge” (Bush, 1945).

In fact, the technology to build a Memex lay 40 to 50 years in the future (Simpson, Mylonas, & van Dam, 1996). More importantly, a practical Memex would require a new conception of the device. Bush conceived it as a monolithic tool like a printing press, a single machine that would dominate an organization. In that role, it came to embody the hopes and fears that the age held for computing technology. Fictionalized in the 1955 play Desk Set by William Marchant, it was the device that did intellectual labor and threatened to replace a staff of librarians in a mythical broadcasting network.

Academic researchers gave the Memex the more scholarly name of “Information System” and conceived of it as a repository. Though this new conception helped fit Bush's ideas into the growing field of computer science it also illustrated the limitations of the basic concept. “Strange as it may seem,” wrote one researcher, “the greatest bottleneck is likely to remain for a while the initial generation, typing, and keypunching (where needed) of the incoming information” (Salton, 1966).

As a group, newspaper owners were less interested in developing new technologies for repositories than in finding tools that reduced the costs of operation or expanded the scope of their audience. The publishers of 1945 knew that the social and economic standing of their profession had been declining over the prior two decades. Advertising revenue had declined 40% during the Great Depression. Some of the advertisers had gone bankrupt in the weak economy. Others had started promoting their wares on radio stations.

Radio had expanded during the 1930s. It could cover a larger area than most newspapers and did not see an increase in costs when its audience grew. Furthermore, radio could distribute breaking news quickly without the expense of stopping the presses, rewriting and resetting the front page, and distributing a special issue (Allen, 1942).

The decline in revenue was accompanied by a decline in influence and status. In 1920, newspapers were at the height of power and influence in the United States. That year, both the Democratic and Republican parties nominated presidential candidates who were newspaper publishers: Warren G. Harding (1865–1923) of the Marion Daily Star and James Cox (1870–1957) of the Dayton Daily News. In 1945, newspapers could no longer provide for that kind of public career.

Turning to the new technologies that were being developed for the war, publishers tested devices that would improve the operations of their organizations. “Throughout the newspaper profession and the printing industry,” explained one editor of the time, “there is an alert lookout for inventions that will reduce the mechanical costs of producing and distributing newspapers” (Garnett, 1942). Over the prior decade, many papers had adopted the simple invention of a network of boys who carried papers directly to readers' homes. “Newsstand circulation is still important in a few large cities,” explained one editor, “but even in these it is far less so than ten years ago, since the majority of today's newspapers are delivered direct to the family doorstep” (Robb, 1942).

In general, publishers found it easier to adopt small technological improvements than to invest in a major change to their operations. For three or four years, a number of US papers had been testing a telegraph technology that allowed reporters to typeset their copy from distant locations. Since it removed the need to retype the copy at the newspaper office, it was, according to one editor, a “very important economy in both time and money” (Harrison, 1940).

Even though telegraphic typesetting appeared to give reporters direct access to the typesetting machinery, it was actually a translation technology. A small machine in the newspaper office would take the text and a few codes and prepare a line of type. It replaced a human worker who would have taken the text from the telegraph and retyped it in the house style. It simplified the process so that reporters could translate their information without knowing all the details of typesetting or even seeing the finished product. After “a very short training,” argued an editor, “they make very good telegraphic-typesetting operators” (Harrison, 1940).

The telegraphic technology was not general enough to handle all kinds of text. It was generally employed only on financial reports. However, it identified the kind of tools that would be valued by a newspaper. It required only a small investment. It replaced a skilled worker. It transformed material from one form to another. It generated a substantial return. By some accounts, it reduced costs by 70% (Harrison, 1940).

A Theory of Language

The telegraphic typesetting devices were very modest machines that accomplished their tasks with a few levers and electrical actuators. To build more sophisticated machines, especially machines that could deal with the problems of preparing and editing text, computer scientists needed to better understand textual data. “A machine for scanning and selecting printed documents would have to be able to perform a series of functions,” explained a team of researchers. “First, it would have to recognize words, phrases,” and other structure in the text. It would need the ability to identify relationships among the textual data and modify the data. “The heart of the problem is how to define tasks to be performed by the digital equipment so that the mechanized searching and selecting system will provide optimum advantages” (Perry, Berry, Luehrs, & Kent, 1954).

In the late 1940s and early 1950s, computer scientists had only vague ideas about how they could program a machine to recognize words or search a text document or select some aspect of it. The basic theories behind this kind of work began to emerge in about 1955 as researchers began to work on the mechanical translation of one language into another. In the United States the government generously supported work to write a program that would translate Russian documents into English (Greibach, 1981).

The foundation of the new theory of translation was Noam Chomsky's analysis of the structure of language and his hierarchy of grammars. This hierarchy described a set of increasingly complex languages and showed how each set could be recognized by a certain kind of computer program. “We have seen certain indications that this approach may enable us to reduce the immense complexity of actual language to manageable proportions,” he explained. It might also “provide considerable insight into the actual use and understanding of language” (Chomsky, 1956).

Chomsky's hierarchy and the other ideas of formal language theory became the basis not only for the translation of one human language into another but also for the translation of one computer language into another. Computer scientists found their own work more amenable to the theory because they could design a language to fit Chomsky's ideas rather than try to adapt Chomsky's categories to an existing human language. Many computer problems fit a formal scheme of translation, including the translation of commercial transactions into an accounting format, the translation of one database format into another, and the translation of a computer program from a language that a human could understand into one that a machine could handle.
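
The difference between the levels of the hierarchy can be illustrated with a small sketch. The fragment below, written in Python purely for illustration (neither the languages nor the function names come from Chomsky's work or from any newspaper system), recognizes a regular language with a simple two-state machine and a context-free language with a counter that stands in for a pushdown stack.

```python
def accepts_regular(s: str) -> bool:
    """Recognize the regular language a*b* with a two-state machine:
    once a 'b' has been seen, no further 'a' may appear."""
    state = "A"                      # state A: still reading a's
    for ch in s:
        if ch == "a" and state == "A":
            continue                 # stay in state A
        elif ch == "b":
            state = "B"              # move to (or stay in) state B
        else:
            return False             # an 'a' after a 'b', or a stray symbol
    return True


def accepts_context_free(s: str) -> bool:
    """Recognize a^n b^n, which no finite-state machine can handle;
    the counter plays the role of a pushdown stack."""
    count, seen_b = 0, False
    for ch in s:
        if ch == "a" and not seen_b:
            count += 1               # push
        elif ch == "b":
            seen_b = True
            count -= 1               # pop
            if count < 0:
                return False
        else:
            return False
    return count == 0


print(accepts_regular("aaabb"), accepts_context_free("aaabb"))   # True False
print(accepts_context_free("aaabbb"))                            # True
```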

Though the ideas of formal language theory brought new power to these problems, they did not yet replace the heuristic tools that engineers would bring to a problem. In 1957, IBM researcher John Backus largely ignored formal methods as he developed the programming language FORTRAN and the program that would translate FORTRAN into machine instructions. We “simply made up the language as we went along,” he admitted (Backus, 1979).

The early computer programs for typesetting reflected more of John Backus's approach to programming than the formal theories of Chomsky. In the 1960s and 1970s, many of these programs would have to be redeveloped with more formal tools. However, the products of the 1950s proved to be useful to computer scientists as they developed the technology of editing and typesetting.

In 1963, MIT researchers worked to develop a FORTRAN program that would translate text from an IBM 709 computer into a form that could be recognized by a Photon optical typesetting machine. The Photon is often described as a computerized typesetter, but that overstates its abilities. It was not truly programmable; it used digital circuits to control an optical device that projected letters onto photosensitive paper, and its text came from an operator typing at an electric keyboard. The machine was developed in the early 1950s and was first installed at the Patriot Ledger in Quincy, Massachusetts, in 1956. By 1963, at least seven other newspapers used the device to set type (Neiva, 1996).

The Photon was already beginning to put pressure on newspaper organizations. Since it could be operated by a relatively unskilled typist, it removed the need for traditional typesetters, who worked with machines that cast the letters in hot lead. Ultimately, the Photon and similar machines would virtually eliminate the old typesetting jobs: between 1969 and 1976, union linecasting jobs in the United States dropped from 11,557 to 1,877 (Cortada, 2006, vol. 2, p. 318). At MIT, the researchers wanted to use the device “to photocompose conventional computer results of a numerical nature,” primarily numerical tables and reports. They also saw the possibility that the Photon would allow them “to use the computer to organize verbal and other material in routine ways, for subsequent photocomposition” (Barnett, Moss, Luce, & Kelley, 1963).

In many ways, the problem of connecting the two machines was straightforward, but it provided a new insight into the nature of language and translation. Instead of translating a paragraph from Russian into English or a computer program from FORTRAN into IBM 709 machine code, this project took text that would have been destined for a computer line printer and translated it into the language that the Photon could process by adding codes for layout, font styles, and type sizes – codes that a skilled operator would normally enter at the machine's keyboard. The MIT researchers noted that such problems “involve a mixing of data and ‘program’ in a fashion that is not so common in high speed computing.” They added that “Analogies can be sought for this mixing in other fields – stage instructions in a play, style comments and key changes in a musical score, control codes that set selectors in punched-card accounting – but very few in high speed computing” (Barnett et al., 1963).
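
A minimal sketch can make the point concrete. The Python fragment below (used here only for illustration; the control codes <<font>>, <<size>>, and <<nl>> are invented and are not the Photon's actual codes) takes lines destined for a line printer and interleaves them with layout instructions, mixing data and ‘program’ in exactly the sense the MIT group described.

```python
def to_typesetter(lines, font="news-roman", point_size=8):
    """Wrap plain line-printer text in (invented) typesetting codes."""
    out = [f"<<font {font}>>", f"<<size {point_size}>>"]
    for line in lines:
        out.append(line.rstrip())
        out.append("<<nl>>")          # explicit end-of-line code for the typesetter
    return " ".join(out)

report = ["STOCK        HIGH   LOW    CLOSE",
          "ACME CORP    24.50  23.75  24.25"]
print(to_typesetter(report))
```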

The idea that programs contained both data and instructions was nothing new. The original documents that described stored program computers made the point that data and instructions could be indistinguishable and both could be modified by other programs (Neumann, 1945). Yet, in the early 1960s, programmers were just starting to understand the full ramifications of this idea. From this work came the idea that data did not really exist as static entities but were always connected to a program that could manipulate them.

During this period, several laboratories began to think about organizing objects that were both programs and data. The idea emerged at several different laboratories at roughly the same time. It is often connected to a group of Norwegian researchers who were trying to write programs that would simulate the actions of physical and social systems. It also appeared at a computer graphics laboratory at the University of Utah. It would eventually move to text processing. In organizing text, the actual characters and words would form the data (Kay, 1993; Holmevik, 1994). The codes that described the format would be the program.

Theory Into Practice

By the middle of the 1960s, computer scientists had developed much of the theory that they needed in order to manipulate text on a computer. Inside the computer, text was represented as numbers, but those numbers could be manipulated like the letters and words that they represented. Bell Labs had been particularly active in developing methods for manipulating text. Its researchers created a computer language, called SNOBOL, expressly for this purpose. It allowed individuals to search for text, replace it, delete it, and perform other operations that are now common (Farber, Griswold, & Polonsky, 1964).
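
The operations that SNOBOL made routine can be suggested with a short sketch. The fragment below uses Python's re module rather than SNOBOL itself, and the story text is invented; it is meant only to show the class of manipulations involved: searching for a pattern, replacing it, and deleting it.

```python
import re

story = "The mayor spoke at the cityhall on Tuesday. The cityhall was full."

found = re.search(r"cityhall", story)            # search for a pattern
fixed = re.sub(r"cityhall", "city hall", story)  # replace every occurrence
trimmed = re.sub(r" on Tuesday", "", fixed)      # delete a phrase

print(found.start())   # position of the first match
print(trimmed)         # the corrected sentence
```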

Yet, SNOBOL was a language. It was not a system that could be easily used to prepare an article, much less a full newspaper. Among the problems that it did not address was the process of translating thoughts into words, of processing the ideas of the writer. At this point, most systems accepted input only from punched cards, paper tape, or magnetic tape. IBM offered a program, called Text/360, that took input from punched cards. This method, explained the developers, “has proven to be fast, tolerant of late inputs and reliable” (Ziegler, 1969). The program took the text from cards, formatted the words to fit on a specified size of paper, and allowed the user to add commands that manipulated the original words. In general, Text/360 instructions can cause the following operations to be performed: text deleted, replaced, or inserted; text copied from one location to another; a word or phrase replaced wherever it appears (Ziegler, 1969).

Bell Labs researchers had recognized that a well-designed program for textual manipulation would interact with people in familiar ways. It would have “a typewriter as good and as simple as an ordinary secretary's machine” that could be “located in the office of the user” (Mathews & Miller, 1965). Bell did not have that kind of facility in the mid-1960s. It required either a computer that could be devoted to a single user or a time-shared system, a large computer that serviced many people who used devices that resembled typewriters.

By 1967, text-processing programs could be found on the time-shared systems of both MIT and the University of California. “Such a program is called an editor,” explained a pair of Berkeley researchers, “and presupposes a reasonably large and powerful machine equipped with disk or drum storage” (Deutsch & Lampson, 1967). These editors worked on a teletype terminal, a keyboard that printed text on a roll of paper rather than some kind of electronic screen. They required writers to have a great deal of patience, discipline, and imagination. They had to keep a detailed picture of their text in their minds as they worked, as the program only allowed them to view small sections at a time (Kaiman, 1968). Still, such editors proved to be powerful and popular tools. They provided “more convenience than even the most elaborate keypunch can provide,” claimed the Berkeley computer scientists (Deutsch & Lampson, 1967).
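
A sketch can suggest what working with such an editor felt like. The fragment below is a toy line editor written in Python; the command letters echo later editors such as ed but are chosen here only for illustration, and the buffer contents are invented. The writer addresses one numbered line at a time and never sees the document as a whole.

```python
buffer = ["The council met on Monday.",
          "It voted to rase taxes.",
          "The mayor objected."]

def execute(command):
    """Interpret one command: p N (print), d N (delete), s N old/new (substitute)."""
    parts = command.split(" ", 2)
    line = int(parts[1]) - 1                 # the user counts lines from 1
    if parts[0] == "p":
        print(buffer[line])
    elif parts[0] == "d":
        del buffer[line]
    elif parts[0] == "s":
        old, new = parts[2].split("/", 1)
        buffer[line] = buffer[line].replace(old, new)

execute("p 2")              # inspect a single line of the unseen text
execute("s 2 rase/raise")   # fix the typo without seeing the rest
execute("p 2")
```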

During the final years of the 1960s, these editors were primarily used by computer scientists to create programs, but they were being adopted for use as publication tools. Typically, a writer would use an editor to add formatting commands to the text and then feed the completed file into a program which, operating much like the MIT Photon system, translated the file into a form that would produce text on a local printer. Several such systems appeared in 1967 and 1968, with MIT's Script system being the most prominent of the group (Madnick & Moulton, 1968).

By 1968, computer scientists had been able to encapsulate that formatting skill within a program and hence free the writer from the problems of transforming words into the appropriate form. “A specialist builds a layout file for use by others,” explained one developer, “and assigns to each convention a simple abbreviated notation to be inserted in the text.” The system even allowed complex documents to be assembled from collections of simple texts (Kaiman, 1968).

Over the next two years, researchers moved one step further and recognized that the structure of a document was more important than its format. A superscript number might or might not be a reference to a footnote, whereas, conversely, a reference to a footnote should always be set as a superscript number, provided that was the style of the document. Beginning in 1969, three researchers at IBM began work on a language that would describe the structure of a document. They viewed their work as a means of organizing textual material so that it could be searched and retrieved, as it might be in an information system like Bush's Memex. “The usefulness of a retrieval program can be affected by its ability to identify the structure and purpose of the parts of text,” explained the group leader. “A typesetting command language could convey such information, but present languages deal with the appearance of the text, not with the purpose which motivated it” (Goldfarb, Mosher, & Peterson, 1970).

The result of their work was a language called the Generalized Markup Language, or GML. Using this language, a writer could identify sections of a document such as the title, the drop header, the author, the introduction, the conclusion, or anything similar. A database could translate these codes into a form that made it easier to search through documents, a printing program could translate them into typesetting commands, and an editor could use them to modify the document.
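
The idea behind GML can be sketched in a few lines. In the Python fragment below, the tag names and both output forms are simplified stand-ins (they are not actual GML syntax or real typesetting commands); the point is that one structural description of a story can be translated into several different target forms.

```python
doc = [("title",  "Council Approves New Budget"),
       ("author", "Staff Reporter"),
       ("para",   "The city council voted 5-2 on Monday..."),
       ("para",   "Opponents promised to revisit the issue.")]

def to_print(doc):
    """Translate the structure into (invented) typesetting commands."""
    styles = {"title": "<<bold 14pt>>", "author": "<<italic 9pt>>", "para": "<<roman 8pt>>"}
    return "\n".join(f"{styles[tag]} {text}" for tag, text in doc)

def to_index(doc):
    """Translate the same structure into a simple record for a search system."""
    return {"title": next(text for tag, text in doc if tag == "title"),
            "body":  " ".join(text for tag, text in doc if tag == "para")}

print(to_print(doc))
print(to_index(doc))
```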

Systems for Editing

The editorial and typesetting systems that appeared at the end of the 1960s made only limited use of the theoretical developments in textual manipulation, yet they radically changed the operation of many newspapers. Between 1969 and 1976, the industry lost 90% of its skilled typesetting workforce (Cortada, 2006, p. 318). This transformation of the workforce laid the groundwork for more sophisticated systems that would follow in subsequent decades.

Some sociologists, such as Wallace and Kalleberg (1982), have argued that the decline was not caused by the introduction of technology but by the “strongly entrenched system of job property rights in the printing industry [that] limited capitalist jurisdiction in the workplace.” While an inflexible labor structure certainly contributed to the transformation of newspaper production, it was clearly a predisposing rather than a precipitating cause. When publishers announced their plans to replace hot metal typesetters with new phototypesetters, the unions fought the changes and were willing to force publishers out of business rather than settle the issue on anything but their terms (Raskin, 1974). These confrontations brought strikes and labor disruptions at major newspapers, including the Washington Post and the New York Daily News (News Union Warns Members Working at Washington Post, 1975).

For the most part, the new typesetting systems of this era were closer to the original Photon typesetter of the 1950s than to the computer systems of Berkeley and MIT of the 1960s. Reporters would prepare their stories on a typewriter or an elementary computer editor, have the copy punched onto paper tape, and feed the tape into the typesetter. By mid-decade, the most advanced of these machines operated like a sophisticated word processor, consisting of a computer, a disk memory, and a keyboard that could be used for entering or editing text (Marcus & Trimble, 2006).

The individual typesetting machines of the late 1960s were soon replaced by larger, unified systems that managed the full editorial process, from the reporter's notes to a page of typeset text. These systems were built around a minicomputer, the term that the age used for a small machine that could support 10–40 users rather than the single user serviced by an optical typesetter.

The New York Daily News was one of the first papers to invest in a unified system. To build the system, it contracted with the Mergenthaler Company, which was shutting its production facility for hot metal Linotype typesetters and was expanding its digital product line. Lacking much of the necessary expertise to create such a system, Mergenthaler subcontracted the task to a pair of independent computer contractors (Marcus & Trimble, 2006).

The Mergenthaler system embodied the formal language theory of academic computer science in a practical system that was designed to run a newspaper. The system was divided into a set of subsystems that mimicked the traditional structure of the newsroom. Each of these subsystems kept the basic data of the newspaper in a formal description. One subsystem kept notes and stories. Another handled classified ads. A third kept display ads. A central database handled financial information. A production system prepared pages for the printing press. Each unit could translate information from one form to another as it moved through the production process (Marcus & Trimble, 2006).

By the early 1980s, such systems were common. Many commentators remarked on how such systems changed the look of the newsroom by replacing old typewriters with cathode ray tubes. More importantly, they altered the operation of the newspaper by replacing skilled workers who helped prepare the final paper. They ended the careers of typesetters, editorial assistants, press controllers, and even copyboys, the apprentices who controlled the flow of information through the editorial office (Snyder, 1995).

By standardizing and systematizing the flow of information, these systems made it easier to disperse editorial and production activities. Writers no longer needed to work in close proximity to their editors. Printing could be moved to the suburbs, where land was cheaper and access easier (Bergin, 2006). The New York Times was one of the first papers to move production out of its editorial offices. “While the New York Times is oriented to this great city,” explained the publisher, “we do it no service hauling 5,000 tons of newsprint a week into and out of the heart of Manhattan” (Kihss, 1975).

First Attempts at Electronic Distribution

The systems of the late 1970s and early 1980s were not yet unified systems that controlled every step of newspaper production. Most papers still relied on a skilled staff to produce a page layout and, of course, they relied on a physical network of trucks, bicycles, and delivery personnel to get the paper to the reader. However, many publishers believed that the technology of teletext might replace that delivery system. Teletext carried information on modified television signals to home receivers. A viewer could read the evening news as it scrolled up on their television (Griss, 1979).

“Skyrocketing growth in the market for new electronic systems in the home and the development of an electronic publishing industry will threaten the future growth of newspapers,” argued one trade journal in early 1980. The author saw such promise in the new technology that he predicted the closing of “some marginal newspapers,” and a decline in the number of pages in other papers. “At the very least,” he added, “the strong growth trend in newsprint demand will be brought to a halt, with the consequential grave effect on newsprint producers” (Teletext threat to newspapers, 1980).

Any enthusiasm for teletext was tempered by the fact that it was not quite a full system that would extend the production of the newsroom. It needed one more piece of software that would extract news stories from edited text, add the appropriate formatting commands for teletext, convert the result to an analog signal, and send it to the television transmitter. A second piece of software at the television receiver would decode the text and control the output. “We are only waiting the arrival of an economical set top teletext adaptor and an entrepreneurial type news service that wants to bring this new information medium to the American public” (Griss, 1979). The technology attracted some substantial investment, including funds from the Knight-Ridder newspaper chain for a trial of the technology in South Florida (Sterling, 2006).

Ultimately, teletext failed to gain a market and quickly vanished after the middle of the decade. Chris Sterling, a historian of communications technology, argued that the failure was initiated when the US government declined to set a single teletext standard for the country. “The result was a gridlock of rapidly declining interest,” he explained. No “sector – manufacturers, content providers, or television broadcasters – was willing (or, under American antitrust laws, able) to take the lead in coordinating choice of a national standard” (Sterling, 2006).

Digital Layout and Digital Distribution

Behind the failure to establish a standard for teletext lay a number of other problems – technical, social, and political – that no single newspaper or chain of papers was prepared to solve. Sterling (2006) identifies the traditional US rejection of government control of communication technology, the conservative strategies of the Reagan administration, and the recession that slowed the US economy in the early years of the decade. Furthermore, no US newspaper was really prepared to deal with the full consequences of electronic distribution.

If teletext had actually forced a decline in newspaper subscriptions and attracted new readers, it would have ended the primacy of the newspaper press in the media. No longer bound by that moment when the press began operation and words turned into ink, publishers would have been forced to conceive of their firms as continuously gathering and producing information, a step that had yet to be taken by television news, an institution whose technology was much more amenable to continuous production. Had publishers been able to deal with this issue at the time, they might have been able to make a gradual and informed transition into a new kind of news organization. When the technologies of electronic distribution matured a decade later, they were combined with the tools of electronic layout. Together, the two destroyed the unity of the newspaper.

The technology of electronic layout came first. It developed out of the idea of software objects, the idea that each of the elements of the paper – the stories, the pictures, the advertisements – could be conceived as little digital collections that contained both data and instructions. This concept became more important because of the laser printer, a device built out of the technology of the electrostatic copier. Electrostatic copiers offered the promise of a flexible printing device that would eliminate the need for the physical layout of a newspaper page. That flexibility came from the technology within the copier. These machines used a wide plastic belt to duplicate documents. The image of the document was reflected onto the belt by a bright light, a light that would destroy part of the charge and leave an electrical imprint of the original item. The machine then moved the belt through carbon dust (or toner) which would stick to the charged parts of the belt. Finally, the copier pressed the belt against a blank sheet of paper and fused the carbon to the paper with heat.

The laser printer used the same technology, except that it replaced the reflected image of a document with a laser beam that was controlled by a micro computer. That computer would take strings of text, much like the phototypesetters read paper tapes of text, turn that text into instructions for the laser, and then guide the laser to create the text on the electrostatic belt, which would eventually be transferred to paper.

The new technology of the laser printer quickly proved to be impracticable if it was used in the obvious, naïve way, the way in which the MIT researchers had connected the Photon typesetter to the IBM 709 two decades before. If a central computer transmitted a document by beginning at the upper left-hand corner and ending at the lower right, the new printers simply could not handle the flow of data and ceased to function. The solution came in the form of a new language, called PostScript, that was developed by John Warnock and Charles Geschke, who had been researchers at the Xerox Palo Alto Research Center in California. The two of them adopted the idea of the programming object, the self-contained package that would incorporate both data and instructions, both text and the codes that would explain how the letters and words were to be presented (Perry, 1988).

Warnock, who did most of the original development of PostScript, followed the pattern that had become well established in computer science. He designed his solution as an intermediate and ephemeral step of translation, as a technology that took edited text, temporarily converted it into the new language called PostScript, and transmitted it to a printer, which transformed the PostScript code into a printed document. However, in an early demonstration of this language, he was forced by circumstances to retain the PostScript code and feed it to a printer at a time when the main computer was not able to operate. In the process, he recognized that by storing the PostScript code in a file, he had created a translation of the document that could be processed by any computer that had his translation software. He named this format the Portable Document Format, or PDF (Grier, 2009).
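
The translation step can be sketched as follows. The Python fragment below generates a small PostScript program from edited copy; the page layout, fonts, and story text are invented for illustration, though the operators used (findfont, scalefont, setfont, moveto, show, showpage) are standard PostScript. Stored in a file, the output is exactly the kind of portable, printer-independent translation described above.

```python
def to_postscript(headline, body_lines, left=72, top=720, leading=12):
    """Translate edited copy into a minimal PostScript page description."""
    ps = ["%!PS-Adobe-3.0",
          "/Times-Bold findfont 18 scalefont setfont",
          f"{left} {top} moveto ({headline}) show",
          "/Times-Roman findfont 10 scalefont setfont"]
    y = top - 2 * leading
    for line in body_lines:
        ps.append(f"{left} {y} moveto ({line}) show")   # one line of body text
        y -= leading
    ps.append("showpage")
    return "\n".join(ps)

page = to_postscript("Council Approves New Budget",
                     ["The city council voted 5-2 on Monday...",
                      "Opponents promised to revisit the issue."])
print(page)
```

(Text containing parentheses would need escaping before being placed in a PostScript string; the sketch omits that detail.)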

PostScript, released in 1984, and PDF files, released seven years later, became key elements of an activity known as desktop publishing.1 Though initial applications of desktop publishing concentrated on small-run publications that could be printed on laser printers or high-speed copy machines, it did nothing to displace the role of the printing press; indeed, the technology was quickly adopted by business firms for commercial purposes (Janes & Snyder, 1986). Instead, it added a key step to the electronic chain of production that began with assembling notes and ideas, moved through the writing and editing of an article, and ended with typesetting, page layout, and the printing of the final document.

Desktop publishing did not address electronic distribution, the one step in the chain that electronic technology had yet to reach. This step was connected, of course, to the rise of electronic networks and involved one final act of translation. Digital networks had a long history in computing technology. In 1940, Bell Labs built an electric calculator in New York City that could be operated at a distance using a special keyboard and standard telephone lines. It was demonstrated in September of that year to a meeting of the Mathematical Association of America in Hanover, New Hampshire (Andrews, 1982). In the 1950s, the US Air Force built a computer system, called SAGE, that used telephone lines to gather radar data (Ceruzzi, 1998, p. 51).

None of these early systems, nor much of the subsequent technology, provided a workable means for distributing documents electronically. Part of the reason for this, like the reason for the failure of teletext, was the lack of a common standard for network technology. However, an equally pressing problem was the lack of a system that would translate documents into a form that made best use of the network technology.

Standard network technology came from work done for the Advanced Research Projects Agency of the US Department of Defense (Abbate, 1999). This work, which was first demonstrated in 1969, developed the ideas of packet switching and the Internet Protocol, ideas which formed the basis for the modern Internet. The technology that would map documents onto networks came from the European research laboratory CERN in 1990 and borrowed heavily from Vannevar Bush's Memex machine and the information systems of the 1950s.

“We should work toward a universal linked information system,” wrote CERN scientist Tim Berners-Lee in the spring of 1990. “The aim would be to allow a place to be found for any information or reference which one felt was important, and a way of finding it afterwards.” Rather than attempting to build this information system as a single machine, he designed it to be a computer network because the ideas themselves were linked like a network (Berners-Lee, 1989).

In designing this system, which became known as the World Wide Web, Berners-Lee faced one more translation problem. He would translate documents into a new language, based on the Generalized Markup Language, that would identify the document, describe how it should be displayed, and link that document with related documents in other parts of the system. A program called a server would respond to requests by sending the appropriate document across the network. One final piece of software, called the browser, would translate the document into a form that would be properly displayed on the final computer (Berners-Lee & Connolly, 1993).
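
The three translations named here can be sketched in miniature. The Python fragment below is not Berners-Lee's software; the story text, tag choices, and URL path are invented. It translates a story into simple HTML with a link to a related document, and a small server hands that HTML to any client that asks, leaving the browser to perform the final translation into a displayed page.

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

def to_html(title, paragraphs, related_url):
    """Translate a story into a minimal HTML document with one link."""
    parts = [f"<html><head><title>{title}</title></head><body>", f"<h1>{title}</h1>"]
    parts += [f"<p>{p}</p>" for p in paragraphs]
    parts.append(f'<p><a href="{related_url}">Related coverage</a></p></body></html>')
    return "\n".join(parts)

PAGE = to_html("Council Approves New Budget",
               ["The city council voted 5-2 on Monday..."],
               "/archive/budget-history.html")

class StoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server's only job is to send the translated document.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), StoryHandler).serve_forever()
```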

Berners-Lee did not necessarily see his creation as a general tool that could be used for commercial news as well as research documents. The debates about this technology “have sometimes tackled the problem of copyright enforcement and data security,” he wrote. “These are of secondary importance at CERN, where information exchange is still more important than secrecy” (Berners-Lee, 1989). Still, his ideas quickly gained an audience beyond CERN and soon displaced competing technologies that were unable to translate documents into forms that used networks efficiently (Frana, 2004; Khare, 1999; Obraczka, Danzig, & Li, 1993).

The Consequences of Completing the Chain

Unlike the desktop publishing technology, the tools of the World Wide Web were profoundly unsettling to newspaper publishers. “What's the difference between 1990, when newspapers seemed on the verge of getting their electronic act together,” asked one columnist for Editor and Publisher, “and 1995, when the world seems to be skittering ahead with or without them?” (Conniff, 1995). He claimed that the answer was simple: the World Wide Web. The new technology did not easily protect copyrights. It allowed competition to enter the market with minimal capital investment. It did not support a simple subscription service.

Perhaps most fundamentally, the software systems that managed the production of news, from the reporter's notes to the completed web document, undermined the unity of the newspaper. The same software that bundled the program and data that presented the news allowed the different elements of the paper to break away from the common core and seek their fortune as independent entities.

For most of its history, the newspaper had presented itself as a unified reflection of the community it served. It would have something for everyone even though few would read the entire paper. “Today's press is a composite, omnibus vehicle carrying a variety of loads,” observed one editor at the start of the computer era (Poynter, 1942). Some of those loads were more important to the journal than others. The critic Marshall McLuhan noted that the “classified ads and stock-market quotations are the bedrock of the press. Should an alternative source of easy access to such diverse daily information be found, the press will fold” (McLuhan, 1994, p. 207).

To classified ads and stock reports could be added sports news and weather reports. These elements of the newspapers attracted paying readers. They were also elements that could be isolated from the old omnibus paper and published individually provided that the cost of publication was sufficiently low. “This unbundled deployment of the multiple media that make up the guts of a newspaper is a profound development,” argued Editor and Publisher. “For the first time, using the Web/Mosaic combination, newspapers have the chance to make universally available all the variegated pieces that go into the making of information.” By the summer of 1995, at least one individual, Craig Newmark, recognized this possibility and was offering a classified ads service on the Web that he called Craigslist.

Newspapers responded to the new technology in a variety of ways. Some embraced it uncritically. Some utilized it in part. Some exerted great effort to maintain the unifying power of the printing press. Perhaps the Washington Post invested the greatest amount of resources in the attempt to retain control over its publication process. It decided to shun World Wide Web technology and instead purchased a different kind of technology, called Interchange, from AT&T. Interchange was designed to be a point of entry to the Internet. Subscribers would connect to the service using a traditional phone line; through Interchange, they would have complete access to the Internet as well as to the news of the Post. The Post called this product Digital Ink (Webb, 1995). The new service attempted to retain the unified communal presence of the newspaper. “Digital Ink aims to be the ‘one-stop place to come for information,’” explained Editor and Publisher. “The service seems well positioned to take advantage of its promotional and advertising potential and integrates advertising with editorial, just like a newspaper” (Webb, 1995).

Digital Ink did not survive a full year of operation. Once readers had access to the World Wide Web, they had little interest in getting all of their information from a single source. They looked for services that provided more information about a narrower subject: weather reports from a source that specialized in global weather, classified ads from an institution that offered goods from anywhere in North America, sports stories from an organization that tracked every player on every team.

Digital Ink accepted its first customers in July 1995 and disconnected the last of them by June of the following year. Over the intervening 12 months, no more than 11,000 individuals accessed the paper. In the first days of the service that replaced it, Washingtonpost.com, which was built with World Wide Web technology, a million people looked at the site. “It's easy to condemn it and we all do. But everyone else made a lot of the same mistakes,” explained one of the managers of Digital Ink. “The Web happened. And messed up a lot of people” (Mark Potts, quoted in Jenkins, 1996).

The Transformed Newspaper

In his study The Digital Hand, historian Jim Cortada (2006) argues that digital technology “came initially to newspapers for the same reason as to so many other industries: to drive down the costs of labor and thereby improve productivity” (p. 315). His approach helps us understand the pattern the US industry followed when it adopted computer production technology and the development of that technology itself, but it does not quite illustrate the role of the production process within newspapers, nor does it explain the central role of the printing press in the dissemination of newspapers.

In 1995, just as the Washington Post and other papers were developing their web pages, the head of the MIT media laboratory, Nicholas Negroponte, argued that the “medium is not the message in a digital world.” He dismissed the famous phrase of Marshall McLuhan by arguing that messages were now freely translatable from one format to another, that a “message might have several embodiments automatically derivable from the same data” and hence are not trapped in a single media (Negroponte, 1995, p. 71).

Yet McLuhan's point remains valid without undermining the claim of Negroponte. The kind of printing press that one needed to publish a daily paper influenced the nature of the production process and, ultimately, the messages carried by the product. A press is an expensive capital good that needs to be kept in operation and yet be connected to the daily cycle of life in a community. The “dateline is the only organizing principle of the newspaper image of the community,” McLuhan argued. “Take off the dateline and one day's paper is the same as the next” (McLuhan, 1994, p. 212).

While the printing press was central to the production of a daily paper, it was not replaced in a moment. Indeed, newspaper publishers introduced technology to improve the process rather than to make a major change. Digital systems appeared, step by step, as technologies of translation, programs that would take a message in one form and change it into another. Beginning with telegraphic typesetting, these systems translated text from one form to another and gradually built a chain of production that prepared and distributed the news.

In 1996, following the demise of Digital Ink, newspaper technology began to move beyond the printing press. That summer, the trade publication Editor and Publisher surveyed the nation's newspapers and concluded that well over 90% of them had established a presence on the World Wide Web (Jenkins, 1996). Unlike previous efforts to introduce digital technology into the production of news, this last step was driven not by a desire to reduce costs but by a concern to establish a place in the growing market that the networks supported. “Most online newspapers have yet to turn a profit,” explained a US government report, “but they remain committed to the Internet” (Margherio, 1998, pp. A4–5).

“While there is some concern over the acceptability of the newspaper in electronic versus paper form, the electronic form appears to be inevitable,” argued researchers who were studying methods of disseminating printed news in the middle of the 1990s. “However, the newspaper metaphor is likely to be only the initial interface as the paradigm continues to evolve” (Burkowski, Shepherd, & Watters, 1994).

The metaphor of the newspaper, the idea of a news organization that was rooted in its community, centered on an expensive printing press, and driven by the daily cycles of life, has been surprisingly slow to evolve. Long after computer software collapsed the news cycle into instantaneous postings and stretched the community to embrace the globe, news organizations still clung to policies that had been developed for an institution that has since evolved into new forms. Most news organizations require reporters to use the byline of their physical location, even though that location may have nothing to do with the story that they are writing.

One news organization required a reporter to spend the night of January 19, 2009, in a rented apartment so that he could use a Washington, DC, byline for his story on the inauguration of Barack Obama. He had no way to see the inauguration except on television, no way to gather information except by telephone and email, no way to file except over the Internet. His relationship to the processes that created the news and those that produced the paper was unaffected by his physical location. The news organization had completely abandoned its newspaper press and cast its lot with digital publication. Its operation was no longer driven by deadlines imposed by a physical press but by a system that gathered information from around the globe, translated it from one form to another, combined its stories with advertising, and distributed the final product to all who wanted it. That system was built upon four decades of research into the best methods of manipulating text and graphics. Yet it had failed to transform the basic assumption that reporters can write only about the countries, the cities, and the neighborhoods in which they actually reside (Grier, 2009).

NOTE

1 Desktop publishing is usually dated to a string of articles that appeared in business magazines in the spring of 1986. For example, PC World ran a series of articles on the subject that spring. See Desktop publishing (1986).

REFERENCES

Abbate, J. (1999). Inventing the Internet. Cambridge, MA: MIT Press.

Allen, C. (1942). The press and advertising. Annals of the American Academy of Political and Social Science, 219, 86–92.

Andrews, E. G. (1982). Telephone switching and the early Bell Laboratories computers. Annals of the History of Computing, 4(1), 13–19.

Backus, J. (1979). The history of Fortran I, II, and III. Annals of the History of Computing, 1(1), 21–37.

Barnett, M. P., Moss, D. J., Luce, D. A., & Kelley, K. L. (1963). Computer controlled printing. Proceedings – Joint Spring Computer Conference, 1963 (pp. 263–287). AFIPS.

Bergin, T. (2006). The proliferation and consolidation of word processing software 1976–1985. Annals of the History of Computing, 28(4), 32–47.

Berners-Lee, T. (1989). Information management: A proposal. Retrieved from http://www.w3.org/History/1989/proposal.html

Berners-Lee, T., & Connolly, D. (1993, June). Hypertext markup language. Retrieved from http://tools.ietf.org/html/draft-ietf-iiir-html-00

Burkowski, F. J., Shepherd, M. A., & Watters, C. R. (1994). Delivery of electronic news: A broadband application. CASCON '94: Proceedings of the 1994 Conference on the Center for Advanced Studies on Collaborative Research.

Bush, V. (1931). The differential analyzer, a new machine for solving differential equations. Journal of the Franklin Institute, 212, 447–488.

Bush, V. (1945). As we may think. Atlantic Magazine, 176, 101–108.

Campbell-Kelly, M. (2003). From airline reservations to Sonic the Hedgehog: A history of the software industry. Cambridge, MA: MIT Press.

Campbell-Kelly, M., & Williams, M. (Eds.). (1985). The Moore School lectures: Theory and techniques for design of electronic digital computers. Cambridge, MA: MIT Press.

Ceruzzi, P. E. (1998). A history of modern computing. Cambridge, MA: MIT Press.

Ceruzzi, P. E. (2003). A history of modern computing (2nd ed.). Cambridge, MA: MIT Press.

Chalmers, A. (1810). The works of the English poets from Chaucer to Cowper. London, UK: J. Johnson et al.

Chomsky, N. (1956). Three models for the description of language. IRE Transactions on Information Theory, 2, 113–124.

Conniff, M. (1995). A tangled web for newspapers. Editor and Publisher, 128(5). Retrieved from http://web.ebscohost.com.proxygw.wrlc.org/ehost/detail?vid=5&hid=13&sid=4336955f-bd6e-4673-8b54-f49bd1cd3330%40sessionmgr4&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=buh&AN=9502172770

Cortada, J. (2006). The digital hand (vol. 2). Oxford, UK: Oxford University Press.

Desktop publishing. (1986). PC World, 4(7), 170.

Deutsch, P., & Lampson, B. (1967). An online editor. Communications of the ACM, 10(12), 793–800.

Farber, D. J., Griswold, R. E., & Polonsky, I. P. (1964). SNOBOL, a string manipulation language. Journal of the ACM, 11(2), 21–30.

Frana, P. (2004). Before the Web there was Gopher. Annals of the History of Computing, 26(1), 20–41.

Garnett, B. (1942). Changes in the basic newspaper pattern. Annals of the American Academy of Political and Social Science, 219, 53–59.

Goldfarb, C., Mosher, E., & Peterson, T. (1970). An online system for integrated text processing. ASIS Proceedings (vol. 7). Presented at the American Society for Information Science, Philadelphia, PA. Retrieved from http://www.sgmlsource.com/history/jasis.htm

Greibach, S. A. (1981). Formal languages: Origins and directions. Annals of the History of Computing, 3(1), 14–41.

Grier, D. A. (2009, June). Interview with John Warnock and Charles Geschke.

Griss, W. S. (1979). Info-text, newspaper of the future. IEEE Transactions on Consumer Electronics, CE-25(3), 295–297.

Harrison, H. H. (1940). Telegraphic typesetting. Journal of the Institution of Electrical Engineers, 87(526), 401–423.

Holmevik, J. R. (1994). Compiling Simula: A historical study of technological genesis. Annals of the History of Computing, 16(4), 25–37.

Janes, J., & Snyder, M. (1986). Be your own publisher. SIGDOC '86. Presented at the 5th Annual Conference on Systems Documentation.

Jenkins, M. (1996). The Washington Post recovers from a digital false start. Brandweek, 37(39), N14.

Kaiman, A. (1968). Computer-aided publications editor. IEEE Transactions on Engineering Writing and Speech, EWS-11(2), 65–75.

Kay, A. (1993). The early history of Smalltalk. Retrieved from http://www.smalltalk.org/downloads/papers/SmalltalkHistoryHOPL.pdf

Khare, R. (1999). Who killed Gopher. Internet Computing, 3(1), 81–84.

Kihss, P. (1975, January 24). The Times plans satellite printing plant at a cost of $35 million in New Jersey. New York Times, p. 39.

Madnick, S., & Moulton, A. (1968). Script, an on-line manuscript processing system. IEEE Transactions on Engineering Writing and Speech, EWS-11(2), 92–100.

Marchant, W. (1955). Desk Set. New York, NY: Samuel French.

Marcus, M., & Trimble, G. (2006). Taking newspapers from hot lead into the electronic age. Annals of the History of Computing, 28(4), 96–100.

Margherio, L. (1998). The emerging digital economy. Washington, DC: US Department of Commerce, Economics and Statistics Administration. Retrieved from http://www.esa.doc.gov/reports.cfm

Mathews, M. V., & Miller, J. (1965). Computer editing, typesetting and image generation. Proceedings of the Fall Joint Computer Conference 1965 (pp. 389–398). Presented at the Fall Joint Computer Conference 1965, AFIPS.

McLuhan, M. (1994). Understanding media. Cambridge, MA: MIT Press.

Negroponte, N. (1995). Being digital. New York, NY: Knopf.

Neiva, E. M. (1996). Chain building: The consolidation of the American newspaper industry, 1953–1980. Business History Review, 70(1), 1–42.

Neumann, J. von (1945). First draft of a report on the EDVAC. Retrieved from http://qss.stanford.edu/~godfrey/vonNeumann/vnedvac.pdf

News Union Warns Members Working at Washington Post. (1975, October 10). New York Times, p. 9.

Obraczka, K., Danzig, P. B., & Li, S.-H. (1993). Internet resource discovery services. Computer, 26(9), 8–22.

Perry, J. W., Berry, M. M., Luehrs, F. U., & Kent, A. (1954). Automation of information retrieval. AIEE-IRE '54 (Eastern).

Perry, T. (1988). Postscript prints anything: A case history. IEEE Spectrum, 25(5), 42–46.

Poynter, N. (1942). The economic problems of the press and the changing newspaper. Annals of the American Academy of Political and Social Science, 219, 82–85.

Raskin, A. H. (1974, April 16). Newspaper talks: Clash of compulsions. New York Times, p. 81.

Robb, A. (1942). The ideal newspaper of the future. Annals of the American Academy of Political and Social Science, 219, 169–175.

Salton, G. (1966). Information dissemination and automatic information systems. Proceedings of the IEEE, 54(12), 1663–1678.

Simpson, R., Mylonas, E., & van Dam, A. (1996). 50 years after “As we may think”: The Brown/MIT Vannevar Bush symposium. Interactions, 3(2), 47–67.

Snyder, E. E. (1995). A copyboy in love with newspapers. Oregon Historical Quarterly, 96(2/3), 226–241.

Special to the New York Times. (1944, August 7). Algebra machine spurs research calling for long calculations. New York Times, p. 17.

Sterling, C. (2006). Pioneering risk: Lessons from the US teletext/videotext failure. Annals of the History of Computing, 28(3), 41–47.

Teletext threat to newspapers. (1980). Electronics and Power, 26(1).

Wallace, M., & Kalleberg, A. (1982). Industrial transformation and the decline of craft: The decomposition of skill in the printing industry. American Sociological Review, 47(3), 307–324.

Webb, W. (1995). Washington Post debuts its Digital Ink online service. Editor and Publisher, 128(30), 25.

Ziegler, J. C. (1969). Text/360 from a user's point of view. IEEE Transactions on Engineering Writing and Speech, EWS-12(2), 33–35.

Zuse, K. (1982). Method for automatic execution of calculations with aid of computers (1936). In B. Randell (Ed.), The origins of digital computers (3rd ed., pp. 163–169). Berlin, Germany: Springer.
