Appendix E. Historical Perspective

Winston Churchill once said that the farther backward you can look, the farther forward you are likely to see. This can apply to any field of endeavor and implies that the vision ahead of us is sometimes brought into clearer focus by first looking back at where we came from. This appendix presents an historical perspective of information technology (IT) during its infancy in the middle of the 20th century. My intent here is to present some of the key attributes that characterized IT environments of that time and to show how these attributes contributed to the early development of systems management.

I begin by describing a timeline from the late 1940s through the late 1960s, highlighting key events in the development of IT and its inherent infrastructures of the time.

I then look at some of the key requirements of an ideal IT environment that were lacking during that era. These deficiencies led to the next major breakthrough in IT—the development of the IBM System/360.

Timelining Early Developments of Systems Management

Historians generally disagree as to when the modern IT era began because IT is an evolving entity. Inventors and scientists have experimented with devices for counting and tabulating since at least the early days of the abacus.

In 1837, English mathematician Charles Babbage conceived of an Analytical Engine, which could store programs to compute mathematical functions. While lack of funding prevented him from developing it much past an initial prototype, it amazingly contained many of the concepts of early digital computers that would take nearly a century to develop.

George Boole developed Boolean algebra in 1854. Simply described, it is the mathematics of counting on two fingers, an idea that led scientists to consider two-state mathematical machines utilizing devices that were either on or off. A few years later, advances were made in which electrical impulses were first used as a means to count and calculate numbers. A minor breakthrough occurred in the 1870s when mathematician/scientist Eric Von Neuman invented a reliable electrical/mechanical device to perform mathematical operations. His apparatus was simply called the Neuman Counting Machine. In 1889, Herman Hollerith invented punched tape, and later punched cards, to electrically store binary information.

The first half of the 20th century saw an era of refinements, enhancements, and further breakthroughs in the field of electrical/mechanical accounting machines. In 1936, English mathematician Alan Turing described a theoretical two-state machine that could perform operations read from a tape. The concepts behind his so-called Turing Machine later informed the code-breaking machines Turing helped design for Great Britain during World War II. In 1945, mathematician John von Neumann conceived the notion that electronic memory could be used to store both the data to be processed and the instructions for processing it.

By 1946, the field of accounting and the emerging electronics industry had combined forces to develop the first truly electronic calculating machine. The word computer was not yet in common use as a name for such machines; instead, this one was known by its acronym, ENIAC, which stood for Electronic Numerical Integrator and Computer. Dr. John W. Mauchly and J. Presper Eckert, Jr. of the University of Pennsylvania headed up a large team of engineers to develop ENIAC for complex ballistics analysis for the U.S. Army.

Real Life Experience—Playing Tic-Tac-Toe Against Electrons

When I was in elementary school in the mid-1960s, a classmate and I won our school’s annual science fair with an exhibit on how an electric motor works. Our win enabled us to enter a major regional science fair for junior high and high school students. While our relatively simple project earned us an honorable mention at the regional fair, what impressed me most was the electronic Tic-Tac-Toe machine that won 1st place overall.

The machine consisted of dozens of electro-mechanical relays with hundreds of wires going out one side to a large display screen configured with three rows of three squares; out of the other side, wires connected to a keypad with the familiar nine squares from which players could enter their Xs. After a player entered his selection, the relays made loud clicking sounds as they turned on and off for about 10 seconds before showing the machine’s response on the display. Of course, the machine never lost. The exhibitor called his project a Computerized Tic-Tac-Toe Game and it drew huge crowds of onlookers. Years later, when I read about IBM’s Deep Blue supercomputer defeating the world chess champion, I thought back to those noisy, slow-acting relays and marveled at how far we have come.

By today’s standards, ENIAC would clearly be considered primitive. The electronics consisted mostly of large vacuum tubes that generated vast amounts of heat and consumed huge quantities of floor space (see Figure E-1). The first version of ENIAC filled an entire room, occupying roughly 1,800 square feet (about 167 square meters). Nonetheless, it was a significant advance in the infant industry of electronic computing. Remington Rand helped sponsor major refinements to this prototype computer in 1950. The new version, called ENIAC II, may have started a long-standing tradition in IT of distinguishing hardware and software upgrades by incremental numbering.

Figure E-1. U.S. Army Photo of ENIAC

image

In 1951, Remington Rand introduced a totally new model of electronic computer named UNIVAC, short for UNIVersal Automatic Computer. Although UNIVAC still relied on vacuum tubes, a revolutionary new electronic device developed by Bell Laboratories, the transistor, would soon begin to replace them. In addition to saving space and reducing heat output, transistors had much faster switching times, which translated into more computing power in terms of cycles per second. This increase in computing power meant that larger programs could process greater quantities of data in shorter amounts of time.

UNIVAC made a splashy debut in November 1952 by becoming the first electronic calculating machine used in a presidential election. Its use was not without controversy. Many were skeptical of the machine’s reliability; delays and occasional breakdowns did little to improve its reputation. Understandably, many Democrats questioned the accuracy of the results, which showed Republican candidate Dwight D. Eisenhower defeating Democratic candidate Adlai E. Stevenson.

Out of these humble and sometimes volatile beginnings grew an infant IT industry that, in the span of just a few short decades, would become one of the world’s most important business forces. Also out of this came the beginnings of systems management. The experience of UNIVAC at the 1952 presidential election presented computer manufacturers with some harsh realities and provided them with some important lessons. Marketing groups and technical teams at Remington Rand both realized that, for their product to succeed, it must be perceived by businesses and the public in general as being reliable, accurate, and responsive.

The message was not lost on competing companies. International Business Machines (IBM), Control Data Corporation (CDC), and Digital Equipment Corporation (DEC), among others, also saw the need to augment the engineering breakthroughs of their machines with sound availability design. These manufacturers knew only too well that nothing would undermine their technology more quickly than long and frequent outages, slow performance, and marginal throughput. This emphasis on availability, performance and tuning, and batch throughput planted the seeds of systems management. As a result, suppliers began providing redundant circuits, backup power supplies, and larger quantities of main memory to improve the performance and reliability of their mainframes.

The term mainframe had a practical origin. Most of the circuitry and cabling for early computers were housed in cabinets constructed with metal frames. Control units for peripheral devices (such as card readers, printers, tape drives, and disk storage) were also housed in metal frames. In order to distinguish the various peripheral cabinets from that of the main, or central, processing unit (CPU), the frame containing the CPU, main memory, and main input/output (I/O) channels was referred to as the mainframe. The term has since come to refer to any large-scale computing complex in which the CPU has computing power and capabilities far in excess of those of server and desktop computers.

By 1960, mainframe computers were becoming more prevalent and more specialized. Demand was slowly increasing in American corporations, in major universities, and within the federal government. Companies employed business-oriented computers in their accounting departments for applications such as accounts receivable and accounts payable. Universities used scientifically oriented computers for a variety of technical applications, such as analyzing or solving complex engineering problems. Numerically intensive programs came to be known as number crunchers.

Several departments within the federal government employed both scientific and business-oriented computers. The Census Bureau began using computers for the 1960 national census. These computers initially assisted workers in tabulating and statistically analyzing the vast amounts of data collected from all over the country. With the population booming, there was far more data to acquire and organize than ever before. While these early versions of computers were slow and cumbersome by today’s standards, they were well-suited for storing and handling these larger amounts of population demographic information.

The 1960 presidential election was another event that gave added visibility to the use of computers. By this time, business computers were being used and accepted as a reliable method of counting votes. The Department of Defense was starting to use scientific computers for advanced research projects and high-technology applications. The National Aeronautics and Space Administration (NASA), in concert with many of its prime contractors in private industry, was making extensive use of specialized scientific computers to design and launch the first manned U.S. spacecraft as part of Project Mercury.

The Need for a General-Purpose Computer

This increased use of computers exposed a deficiency of the systems that were available at the time. As the number and types of applications grew, so also did the specialization of the machines on which these programs ran. Computers designed to run primarily business-oriented systems (such as payrolls and account ledgers) typically ran only those kinds of programs. On the other hand, organizations running scientific applications generally used computers specifically designed for these more technical programs. The operating systems, programming languages, and applications associated with business-oriented computers differed greatly from those associated with scientifically oriented machines.

This dichotomy created problems of economics and convenience. Most organizations using computers in this era, whether industrial, governmental, or academic, were large, and their business and scientific computing needs began to overlap. Many of these firms found it expensive, if not prohibitive, to purchase a variety of different computers to meet their various needs.

A separate dilemma arose for firms that had only one category of programs to run: How to accommodate programs of varying size and resource requirements quickly and simply on the same computer. For example, the operating system (OS) of many computers designed only for business applications would need to be reprogrammed, and in some cases rewired, depending on the processing, memory, I/O, and storage requirements of a particular job.

Constant altering of the operating environment to meet the changing needs of applications presented many drawbacks. The work of reprogramming operating systems was highly specialized and contributed to increased labor costs. The changes were manually performed and were therefore prone to human error, which adversely impacted availability. Even when completed flawlessly, the changes were time-consuming and thus reduced batch throughput and turnaround. Systems management would be better served if error-prone, time-consuming, and labor-intensive changes to the operating environment could be minimized.

As the need to manage the entire operating environment as a system became apparent, the requirement for a more general-purpose computer began to emerge. Exactly how this was accomplished is the subject of the next section.

A Look at the Early Development of IBM

While this book does not focus on any one hardware or software supplier, when one particular supplier has played a key role in the development of systems management, it is worth discussing. A book on desktop operating systems would no doubt trace some of the history of Microsoft Corporation. A book on integrated circuits would likely discuss Intel Corporation. Here we will take a brief look at IBM.

In 1914, a sales executive named Thomas Watson left his secure job at the National Cash Register (NCR) Company in Dayton, Ohio, to join the Computing-Tabulating-Recording (CTR) Company, which he would soon lead. CTR specialized in tabulating devices, time clocks, meat scales, and other measuring equipment.

Within a few years, his company extended its line of products to include mechanical office equipment such as typewriters and more modern adding machines. Watson was a man of vision who saw his company expanding in size and scope around the world with a whole host of advanced office products. In 1924, he renamed the company the International Business Machines (IBM) Corporation. (The International Business Machines name had in fact already been adopted by the company’s Canadian subsidiary in Toronto; Watson envisioned the entire enterprise quickly becoming a global empire.)

IBM grew consistently and significantly throughout Watson’s tenure as chairman and chief executive officer (CEO) during the first half of the 20th century. The growth of IBM came not only in terms of sales and market share but in its reputation for quality products, good customer service, and cooperative relationships between management and labor. In 1956, Thomas Watson, Sr. turned over the reins of his company to his son, Thomas Watson, Jr.

The Junior Watson had been groomed for the top spot at IBM for some time, having worked in key management positions for a number of years. He had the same drive as his father to make IBM a premier worldwide supplier of high-quality office products. Watson Jr. built and expanded on many of his father’s principles.

For example, he realized that, for the company to continue gaining market share, customer service needed to be one of its highest priorities. Coupled with, and in support of, that goal was the requirement for effective marketing. As to quality, Watson not only demanded it in his company’s products, he insisted it be an integral part of the company’s extensive research into leading-edge technologies as well. Certainly not least among corporate values was Watson’s insistence on respect for every individual who worked for IBM. These goals and values served IBM well for many years.

During its halcyon years in the 1970s and early 1980s, IBM’s name became synonymous with highly reliable data-processing products and services. Perhaps the clearest example of IBM’s reputation around that time was a saying heard often within the IT industry: “No CIO ever got fired for buying IBM products.” Of course, things at IBM would change dramatically in the late 1980s, but prior to that it was arguably the industry leader within IT.

By building on his father’s foundation, Watson Jr. is properly credited with turning IBM into such an industry leader. However, his vision for the long-range future of IBM did differ from his father’s. The senior Watson was relatively conservative, manufacturing and marketing products for markets that already existed. Watson Jr., however, was also intent on exploring new avenues and markets. The new frontier of computers fascinated him, and he saw these machines as the next potential breakthrough in office products.

Prior to becoming IBM’s CEO, Watson Jr. had already started slowly moving the company in new directions. More funding was provided for research and development in computer technology, and new products were introduced in support of this new technology. Keypunch machines, card collators, and card sorters were among IBM’s early entries into this emerging world of computers.

During the late 1950s, Watson Jr. accelerated his company’s efforts at tapping into the markets of computer technology. He also wanted to advance the technology itself with increased funding for research and development. The research covered a broad range of areas within computer technology, including advanced hardware, operating systems, and programming languages.

One of the most successful efforts at developing a computer-programming language occurred at IBM in 1957. A team led by IBM manager John Backus unveiled a new scientific programming language called FORTRAN, for FORmula TRANslator. Specifically designed to run higher-level mathematics programs, it was ideally suited for solving complex analysis problems in engineering, chemistry, physics, and biology. Over time, FORTRAN became one of the most widely used scientific programming languages in the world.

Two years after FORTRAN became available, a business-oriented programming language was introduced. It was called COBOL, for COmmon Business Oriented Language. While IBM did not solely lead the effort to develop COBOL, it did support and participate in the Conference on Data Systems Languages (CODASYL), which sponsored its development. As popular as FORTRAN became for the scientific and engineering communities, COBOL became even more popular for business applications, eventually becoming the de facto business programming language standard for almost three decades.

By the early 1960s, IBM had introduced several successful models of digital computers for use by business, government, and academia. An example of this was the model 1401 business computer, a very popular model used by accounting departments in many major American companies. Despite its popularity and widespread use, the 1401 line, along with most other computers of that era, suffered from specialization. Suppose you had just run an accounts receivable application on your 1401 and now wanted to run an accounts payable program. Since different application software routines would be used by the new program, the operating environment would need to be manually reconfigured, and on some earlier machines literally re-wired, in order to run the application. Due to the specialized nature of the 1401, only certain types of business applications could run on it, and only one could be run at a time.

The specialized nature of computers in the early 1960s was not confined to applications and programming languages. The types of I/O devices that could be attached to and operated on a computer depended on its specific architecture. Applications that required small amounts of data at high speed typically could not run on computers whose devices stored large amounts of data at slow speeds.

To overcome the drawbacks of specialization, a new type of computer would need to be designed from the ground up. As early as 1959, planners at IBM started thinking about building a machine that would be far less cumbersome to operate. Shortly thereafter, Watson Jr. approved funding for a radically new type of IBM computer system to be called the System/360. It would prove to be one of the most significant business decisions in IBM’s history.

The Significance of the IBM System/360

The overriding design objective of the System/360 (S/360) was to make it a general-purpose computer: business-oriented, scientific, and everything in between. (The 360 designation was chosen to represent all degrees of a compass.) This objective brought with it a whole host of design challenges, not the least of which was the development of an entirely new operating system.

This new operating system, dubbed OS/360, proved to be one of the costliest and most difficult challenges of the entire project. At one point, more than 2,000 programmers were working day and night on OS/360. Features such as multi-programming, multi-tasking, and independent channel programs were groundbreaking characteristics never before attempted at such a sophisticated level. Some of the more prominent features of the S/360, and a brief explanation of each, are shown in Table E-1.

Table E-1. Features of the IBM System/360

image

Two other design criteria for OS/360 which complicated its development were upward and downward compatibility. Downward compatibility meant that programs which were originally coded to run on some of IBM’s older models (for example, the 1401) would be able to run on the new S/360 computer under OS/360. Upward compatibility meant that programs coded to run on the first models of S/360 and under the first versions of OS/360 would also run on future models of the S/360 hardware and the OS/360 software. In other words, the architecture of the entire system would remain compatible and consistent.

The challenges and difficulties of developing an operating system as mammoth and complex as OS/360 are well documented in IT literature. One of the more interesting works is The Mythical Man-Month by Frederick P. Brooks, Jr. The thrust of the book is effective project management, but it also discusses some of the many lessons learned from tackling a project as huge as the S/360.

By 1963, the S/360 project was grossly over budget and far behind schedule. Watson’s accountants and lawyers were understandably concerned about the escalating costs of an effort that had yet to produce a single product or generate a single dollar of revenue. The total cost of developing the S/360 (US $5 billion, or roughly $30 billion in 2005 dollars) was estimated to exceed the total net worth of the IBM Corporation. Watson was literally betting his entire company on what many felt was a questionable venture.

Moreover, there were many in the industry, both within and outside of IBM, who questioned whether the S/360 would really fly. Even if the many hardware problems and software bugs could be solved, would conservative companies really be willing to pay the large price tag for an unproven and radically different technology?

The answer to these concerns came on April 7, 1964, when IBM announced the S/360 (see Figure E-2). The system was an immediate and overwhelming success. Orders for the revolutionary new computing system were so heavy that the company could barely keep up with demand. In less than a year, IBM had more than made back its original investment in the system, and upgrades with even more advanced features were being planned.

Figure E-2. Operator Console of IBM System/360 (Reprint Courtesy of International Business Machines Corporation, copyright International Business Machines Corporation)

image

Why was the S/360 so successful? What factors contributed to its unprecedented demand? The answers to these questions are as numerous and varied as the features which characterized this system. S/360 was powerful, high-performing, and reliable. Perhaps most significant, it was truly a general-purpose computing system. Accountants could run COBOL-written business programs at the same time engineers were running FORTRAN-written scientific programs.

In addition to its general-purpose design, the S/360’s independent channels with standard interfaces allowed a variety of high-speed I/O devices to operate concurrently with data being processed. This design architecture resulted in greatly improved operational productivity with increased throughput and reduced turnaround times.

Some observers even felt there were socio-political elements adding to the popularity of the S/360. During the early 1960s, the United States was well on its way to meeting President John Kennedy’s commitment to land a man on the moon and safely return him to earth by the end of the decade. Most of the country rallied behind this national mission to defeat the USSR in the space race. The new technologies required for this endeavor, while perhaps not well understood by the masses, were nonetheless encouraged and certainly not feared. Computers were seen as both necessary and desirable. Technology curricula increased and enrollments in engineering schools reached all-time highs. The IBM S/360 was viewed by many as the epitome of this embracing of technological pursuits.

How S/360 Impacted Systems Management

Two significant criteria for effective systems management were advanced with the introduction of the S/360: managing batch performance and improving availability. Prior to the S/360, most computers required extensive changes in order to run different types of programs. As previously stated, these changes were often error-prone, time-consuming, and labor-intensive. With the S/360, many of these changes became unnecessary and were consequently eliminated. Multiple compilers, such as COBOL and FORTRAN, could run simultaneously, meaning that a variety of different programs could run at the same time without the need for manual intervention.

The immediate benefit of this feature was improved throughput and turnaround for batch jobs. Being able to manage batch performance was one of the key requirements for systems management and the S/360 provided IT professionals many options in that regard. Individual input job classes, priority scheduling, priority dispatching, and separate output classes were some of the features that enabled technicians to better control and manage the batch environment.

The introduction of a software-based job-control language (JCL) reduced the manual changes needed to run multiple job types and increased the availability of the machine. Improving the availability of the batch environment was another key requirement of systems management. Manual changes, by their very nature, are prone to errors and delays; the more such changes are minimized, the more costly outages and downtime are reduced.
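To give a sense of what this looked like in practice, here is a minimal sketch of an OS/360-style job deck rather than an example taken from the original documentation; the job name, account number, program name, and dataset name are hypothetical, and exact parameters varied by installation:

//* HYPOTHETICAL PAYROLL JOB: CLASS AND PRTY ASSIGN THE INPUT JOB
//* CLASS AND SELECTION PRIORITY; MSGCLASS AND SYSOUT ROUTE PRINTED
//* OUTPUT TO SEPARATE OUTPUT CLASSES
//PAYJOB   JOB  (ACCT123),'J SMITH',CLASS=A,PRTY=8,MSGCLASS=A
//STEP1    EXEC PGM=PAYCALC
//MASTER   DD   DSN=PAY.MASTER.FILE,DISP=SHR
//REPORT   DD   SYSOUT=A

Switching to a different program or a different set of input and output files became a matter of editing statements such as these, rather than manually reconfiguring the operating environment.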

Referring to the last four features listed in Table E-1, we see that performance and reliability were helped in additional ways with the S/360. State-of-the-art integrated circuitry greatly decreased the cycle times of digital switches within the computer, resulting in much higher performance. At that time, cycle times were measured in microseconds (millionths of a second). Within a few years they would be reduced to nanoseconds (billionths of a second). This circuitry was sometimes referred to as third generation, with vacuum tubes being first generation and transistors being second. Following this came a fourth generation, based on highly integrated circuitry on chips that had become so tiny that microscope-assisted equipment was needed to design and manufacture them.

Real Life Experience—Early Recognition of Voice Recognition

During a high school field trip in the late 1960s, I visited the Electrical Engineering (EE) department at Purdue University. A professor there showed us the research he was conducting on voice recognition. He had an old manual typewriter outfitted with small, mechanical plungers on top of each key. When he carefully spoke a letter of the alphabet into a massively wired microphone that connected to each of the 26 plungers, the corresponding plunger would activate to strike the proper key and type the correct letter. We were impressed. He had worked years on the research, and this was a breakthrough of sorts. A few years later, I had the same professor for an EE course, and he relished explaining how he had been able to design the electrical circuitry that could recognize speech patterns. Voice recognition is very commonplace today, but occasionally I think back to that rickety old typewriter where much of the original research began.

As previously mentioned, independent channel programs improved batch performance by allowing I/O operations such as disk track searches, record seeks, and data transfers to occur concurrently with the mainframe processing data. I/O devices actually logically disconnected from the mainframe to enable this. It was an ingenious design and one of the most advanced features of the S/360.

Real Life Experience—Quotable Quotes

In researching material for this book, I looked for some insightful quotes to include. Dr. Gary Richardson of the University of Houston provided me with the following three gems.

“I think there is a world market for, maybe, five computers.”

—Thomas Watson, Sr., Chairman of IBM, 1943

“Computers, in the future, may weigh no more than 1.5 tons.”

—Popular Mechanics, forecasting advances in science, 1949

“I have traveled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won’t last out the year.”

—Editor in charge of business books for Prentice Hall, 1957

Another advanced feature of the system was the extensive error-recovery code that was designed into many of the operating system’s routines. Software for commonplace activities—for example, fetches to main memory, reading data from I/O devices, and verifying block counts—was written to retry and correct operations that initially failed. This helped improve availability by preventing potential outages to the subsystem in use. Reliability was also improved by designing redundant components into many of the critical areas of the machine.

Throughout the late 1960s, technical refinements were continually being made to boost the performance of the S/360 and improve its reliability, two of the cornerstones of systems management at the time. However, the system was not perfect by any means. Software bugs were constantly showing up in the massively complicated OS/360 operating system. The number of Program Temporary Fixes (PTFs) issued by IBM software developers sometimes approached 1,000 a month.

Nor was the hardware always flawless. The failure of integrated components tested the diagnostic routines as much as the failing component. But by and large, the S/360 was an incredibly responsive and reliable system. Its popularity and success were well reflected in its demand. To paraphrase an old saying, it might not have been everything, but it was way ahead of whatever was in second place.

Conclusion

Systems management is a set of processes designed to bring stability and responsiveness to an IT operating environment. The first commercial computers started appearing in the late 1940s. The concept of systems management began with the refinement of these early, primitive machines in the early 1950s. Throughout that decade, computers became specialized for either business or scientific applications and became prevalent in government, industry, and academia.

The specialization of computers proved to be a two-edged sword, however, hastening their expansion but hindering the strongly desired systems management attributes of high availability and batch throughput. These early attributes laid the groundwork for what would eventually develop into the 13 disciplines known today. The drawbacks of specialization ultimately led to the development of the industry’s first truly general-purpose computer.

The founding father of IBM, Thomas Watson, Sr., and particularly his son, Thomas Watson, Jr., significantly advanced the IT industry. The initial systems management disciplines of availability and batch performance began in the late 1950s but were given a major push forward with the advent of the IBM System/360 in 1964. The truly general-purpose nature of this revolutionary computer system significantly improved batch performance and system availability, forming the foundation upon which stable, responsive infrastructures could be built.
