© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
J. Bartlett, Programming for Absolute Beginners, https://doi.org/10.1007/978-1-4842-8751-4_2

2. A Short History of Computers

Jonathan Bartlett, Tulsa, OK, USA

The history of computers is weird and wonderful. What started as an abstract philosophical quest ended up setting the course for society for over a century and continues to be one of the most profound parts of modern life. The goal of this chapter is to trace an outline of where computing started, where it has been, and where it is now.

2.1 The Prehistory of Computers

Humans have always had tools. From the beginning, we have built fires, made spears, and constructed houses. At first, however, technology was limited to standing structures or tools that were extensions of the user, such as knives or bows and arrows. Very little early technology was powered and free-functioning; most of it was driven by human effort. Since the power of a machine was limited to what a human could supply, only small machines could be devised.

The ability to power a machine led to huge advances in technology. The earliest power source was probably water: a stream could turn a wheel to grind wheat or operate a sawmill. Water-based power sources, however, were fairly limited in the types of devices they could drive, and such technology was mostly confined to stationary, wheel-based inventions.

This was essentially the state of technology from about 300 BC to the early 1700s AD. At that point in history, technology had two main limiting factors: the limited availability of power and the need for custom-made parts. The industrial revolution solved both of these problems. The steam engine allowed powered machines to be built anywhere; they were no longer tied to streams, since power could be generated from fire and stored water. Eventually this even allowed the creation of trains, since the power source could move with the vehicle.

The other key invention of the industrial revolution was interchangeable parts. This allowed a standardization and maintenance of equipment that was previously unattainable. Instead of each part being a unique piece, parts became standardized, which allowed the machines built from them to become more specialized. It is one of the more curious paradoxes of technology that the less unique the parts become, the more advanced and unique the systems created from those parts can be. Standardization allows users of technology to stop thinking about all of the low-level decisions and focus on the larger, more meaningful ones. It also allows for better communication about systems, because the parts can be more readily described. If I can give you a schematic that lists premade parts, it is much easier to design and communicate that design than if I also had to describe how each individual part was supposed to be made.

So the introduction of widely available powered machinery and standardized parts in the industrial revolution led to an explosion of specialized machines. We then had machines to perform nearly any task a person could want to do. The next step was the introduction of machines directed not by people operating them directly but by coded instructions. The earliest of these machines was the Jacquard Loom, which used punched cards to specify a pattern woven into a fabric. Each hole punched in a card told the machine to raise or lower a particular thread, causing it to be visible or hidden in the pattern. Thus, the loom could be programmed to produce a pattern by specifying, at each point, whether each thread should be raised or lowered.
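
To make the idea of coded instructions concrete, here is a minimal sketch in Python of a punched-card-style pattern. The card layout and symbols are invented for illustration and are not a model of any actual loom.

    # A "punched card" for each row of fabric: 1 = raise the thread (visible),
    # 0 = lower the thread (hidden). The pattern here is just an illustration.
    cards = [
        [1, 0, 0, 1],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [1, 0, 0, 1],
    ]

    # "Weave" the fabric by reading each card and showing raised threads as '#'.
    for card in cards:
        row = "".join("#" if hole else "." for hole in card)
        print(row)

Changing the cards changes the fabric, while the loom itself stays the same; that is the essence of a programmable machine.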

Later inventions applied this concept to mathematics. Calculating machines had been around for a long time; Blaise Pascal’s mechanical calculator was invented in the mid-1600s. However, it had to be operated by hand at every step to actually accomplish the addition. Most mathematical tasks are not single-step like addition but require a process of several steps, sometimes repeated, before an answer is found. Charles Babbage designed a more advanced machine to perform navigational calculations. In this machine, the user entered the input, and the machine then ran a series of steps on that input which eventually yielded results. Babbage later designed a machine that could take a list of arbitrary instructions, much like a modern computer, but he was never able to build that design.

Once humans had the ability to power a machine, to direct a machine with external instructions, and to use those instructions to perform mathematical functions, all of the pieces were in place to create a computer. However, the revolution that brought about computing came not from an invention but from a problem in philosophy.

2.2 The Idea of a Computer

What separates modern computers from the calculating machines of the past is that modern computers are general-purpose computers. That is, they are not limited to a specific set of predesigned features. I can load new features onto a computer by inputting the right program. How did we get the idea of creating such a general-purpose machine?

It turns out that a question in philosophy led to the creation of general-purpose machines. The question was this: is there a way to create an unambiguous procedure for checking mathematical proofs? This seems like an odd question, but it was a pressing one in the late nineteenth and early twentieth centuries. There had been many “proofs” where it was unclear whether the proof actually proved its subject. Thus, philosophers of mathematics tried to find out whether there was a way to devise what was then called an “effective procedure” for checking the validity of a mathematical proof. But that leads to another question: what counts as an “effective procedure” anyway? If I list out the steps of a procedure, how do I know that I have given you enough detail to carry out the procedure exactly as I described it, without any ambiguity?

Alan Turing and Alonzo Church both tackled this problem in the 1930s. Their results showed that unambiguous procedures could be defined with the help of machines: by describing a machine that could perform the operation, one could be certain that the procedure was unambiguous. In addition, Turing described a small set of operations which, given the right input, could mimic any other set of operations. That is, Turing defined the minimum set of features needed for a computing system to become truly programmable, giving the programmer an open-ended ability to write whatever software he or she wanted. Machines and programming languages that are at least as powerful as Turing’s set of features are known as Turing-complete, or universal. Nearly every modern programming language in common use is Turing-complete.
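
To make Turing’s idea concrete, here is a minimal sketch in Python of a Turing-style machine: a tape of symbols, a read/write head, and a table of rules. The particular rule table, which simply flips each bit on the tape and then halts, is invented purely for illustration.

    # A tiny Turing-style machine: the rule table maps (state, symbol) to
    # (new symbol, head movement, next state). This particular table just
    # inverts the bits on the tape and then halts, purely as an illustration.
    rules = {
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
        ("scan", " "): (" ", 0, "halt"),
    }

    def run(tape, state="scan", head=0):
        tape = list(tape) + [" "]      # a blank cell marks the end of the tape
        while state != "halt":
            symbol = tape[head]
            new_symbol, move, state = rules[(state, symbol)]
            tape[head] = new_symbol
            head += move
        return "".join(tape).rstrip()

    print(run("10110"))                # prints 01001

Swapping in a different rule table gives a different program, while the machinery that reads the table never changes; that is what makes the machine general-purpose.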

It is interesting to note that the creation of computing came from a question in philosophy. Many are eager to dismiss the role of philosophy in academia as impractical or unimportant. But, as we see here, like all truths, philosophical truths have a way of leading to things of deep practical importance.

And what happened to the original question: can you develop an effective procedure for checking proofs? The answer, strangely, is no. It turns out that there are true facts that cannot be proven by mechanical means. But to learn that answer, we first had to develop the theory of computing. Of course, that leads to another interesting intersection between computers and philosophy. If there are true facts that cannot be mechanically proven, how could we know that? It must be because our minds cannot be represented mechanically. This puts a limit on the potential capabilities of artificial intelligence and shows that even though computer programmers have developed some very clever means of pretending to be human, the human mind is simply outside the realm of mechanism or mechanistic simulations.1

2.3 The Age of the Computer

Shortly after Turing described the necessary feature set for computers, people began to build them. Probably the first Turing-complete machine was Konrad Zuse’s Z3 computer, built in 1941. Although the Z3’s operating principles were somewhat similar to those of modern computers, it was built from electromechanical relays rather than electronics. The first general-purpose, digital electronic computer was the ENIAC, unveiled in 1946, which was about a thousand times faster than its electromechanical predecessors. It should be noted that although the ENIAC was the size of a very large room, it had roughly the same processing power as a scientific calculator. Its main jobs included performing calculations for the development of the hydrogen bomb and computing artillery firing tables.

The next generation of computers introduced what is normally termed the von Neumann architecture, in which the computer has a single memory area that holds both programs and data. This is based on the fact that both a program and the values the program generates can be represented by numbers. Therefore, the same memory can be used both for the program that tells the computer what to do and for the data that the program generates and operates on. This made computers much easier to program and use, which led to the ability to sell them commercially. The first computer to implement this idea was built at the University of Manchester (the Manchester Mark 1); its commercial derivative, the Ferranti Mark 1, was the first commercially available general-purpose computer. The first mass-produced computer was the UNIVAC I, followed shortly afterward by IBM’s 650. These computers were still massive in size but contained less memory than a single graphic on a modern computer. The UNIVAC I was the first computer to use external tape storage, and external disk storage (similar to modern hard drives) followed soon after.
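
The central point of the von Neumann architecture, that instructions and data live side by side in the same memory as numbers, can be sketched with a toy machine in Python. The instruction encoding below is entirely made up for this illustration; real machines use far richer instruction sets.

    # A toy von Neumann-style machine: one memory array holds both the
    # program (as numbers) and the data it operates on.
    # Invented encoding: 1 = load from address, 2 = add from address,
    # 3 = store to address, 0 = halt. Each instruction takes two cells.
    memory = [
        1, 8,    # load the value at address 8 into the accumulator
        2, 9,    # add the value at address 9
        3, 10,   # store the result at address 10
        0, 0,    # halt
        5, 7, 0, # data: 5, 7, and a cell for the result
    ]

    pc, accumulator = 0, 0           # the program counter starts at address 0
    while True:
        opcode, address = memory[pc], memory[pc + 1]
        if opcode == 0:
            break
        elif opcode == 1:
            accumulator = memory[address]
        elif opcode == 2:
            accumulator += memory[address]
        elif opcode == 3:
            memory[address] = accumulator
        pc += 2

    print(memory[10])   # prints 12

Notice that the program and its data sit in the very same list of numbers, which is exactly what lets a stored-program computer load new programs as easily as it loads new data.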

The next move for computer hardware was toward miniaturization. The original computers used large devices called vacuum tubes to perform data processing (see Figure 2-1, left column). A vacuum tube allows or blocks the flow of current depending on whether current is flowing through another of its wires. Combinations of many such tubes allow data to be stored as current flow, mathematical operations to be performed on that data, and the data to be moved around.

After the vacuum tube came the invention of the transistor (see Figure 2-1, middle column). A transistor generally has three wires, with one wire controlling whether electricity can flow between the other two. As with vacuum tubes, transistors can be wired together to create digital memory, logic operations, and information pathways. Transistors perform the same basic functions as vacuum tubes, but in a much smaller package and on roughly a thousandth of the power, which allowed much smaller devices to be built. The Metrovick 950, released in 1956, was the first commercial computer to operate on this principle.
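
Whether built from tubes or transistors, these switching elements matter because combining them produces logic. The following Python sketch treats each device as a simple on/off switch and shows, in an illustrative way, how such switches combine into logic gates and then into a one-bit adder; it is not a model of any particular circuit.

    # Treating a transistor (or vacuum tube) as a controlled switch: current
    # flows (True) or does not (False). Combining switches gives logic gates.
    def not_gate(a):        # one switch wired to invert its input
        return not a

    def and_gate(a, b):     # two switches in series: both must conduct
        return a and b

    def or_gate(a, b):      # two switches in parallel: either may conduct
        return a or b

    # From these, more complex operations can be built, such as adding two bits.
    def half_adder(a, b):
        total = and_gate(or_gate(a, b), not_gate(and_gate(a, b)))  # XOR
        carry = and_gate(a, b)
        return total, carry

    print(half_adder(True, True))   # (False, True): 1 + 1 = binary 10

Chaining enough of these simple pieces together is, in essence, how both tube-based and transistor-based computers store and process data.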

Miniaturization continued with the advent of integrated circuits, often called microchips or just chips (see Figure 2-1, right column). An integrated circuit allows miniaturized transistors to be fabricated on a single, small plate of silicon. When integrated circuits were first introduced, they held only a few transistors. Today, integrated circuits come in a variety of sizes, and the ones used for desktop computing can hold billions of transistor equivalents on a chip roughly two inches square. Integrated circuits essentially brought computers as we know them into the world, although the computers built from them were at first primarily used by very large businesses.

In the 1960s, Douglas Engelbart led a research team looking at the future of computing. In 1968, Engelbart presented what has been termed “the mother of all demos,” which predicted and demonstrated nearly all aspects of modern personal computing, including graphical interfaces, networking, email, video conferencing, collaborative document editing, and hypertext, the precursor of the Web. This demo inspired a number of companies to start pushing to make this vision of computing a reality. Engelbart had accomplished it in a lab, but others were needed to make it commercially available.

Figure 2-1. Advancements in Computer Hardware Miniaturization (three photographs: a vacuum tube, a transistor, and a chip)

The picture on the left is of a vacuum tube (photo courtesy of Tvezymer on Wikimedia). Vacuum tubes are still around today, primarily for audio applications. The picture in the middle is of a transistor. Transistors were much smaller, required fewer materials to produce, and used much less power, but still did largely the same job as vacuum tubes. The picture on the right is a modern microchip used in appliances (photo courtesy of Vahid alpha on Wikimedia). Such a microchip contains the equivalent of a few hundred thousand transistors.

The first recreational personal computer was the Altair, and the first commercial personal computer was the Apple I, which came out in 1976; after that, a flood of personal computers entered the market. IBM eventually joined in, with Microsoft providing the operating system for its computers. The interfaces for these computers were usually text-only. Eventually, however, Apple released the Macintosh, which inaugurated the age of graphical user interfaces. Shortly after, Microsoft released Windows, which brought the graphical interface to the IBM side of the personal computer world.

2.4 Computers in the Age of Networks

Thus far, computers had been largely isolated machines. You could share files using disks, but, by and large, computers operated alone. Two or more computers linked together form a network. Though networking technology had been around for quite a while, it had not been cheap enough or popular enough to make an impact on most personal computer users.

For most users, networking started with office file-sharing systems, usually using a type of local networking called Ethernet, which runs network services over specialized cables. People would use applications installed on their own computers but store the files on a server so that other members of the office could access them. A server is a computer on the network that provides one or more services to other computers and users; a software program that accesses a server is often called a client. Many office networks eventually added groupware services: local email and calendar-sharing systems that allowed the office to work together more efficiently. While smaller organizations focused on local services such as file sharing and groupware, larger institutions were also at work linking networks together, which allowed organizations to share information and data with each other more easily.
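
The client/server relationship described above can be sketched in a few lines of Python using the standard socket library. This is purely an illustration: the port number and messages are arbitrary choices, and the “server” here is just a thread running in the same program.

    # A minimal client/server sketch: one thread acts as the server, the main
    # program acts as the client. The port number and messages are arbitrary.
    import socket
    import threading
    import time

    def server():
        with socket.socket() as s:
            s.bind(("localhost", 5050))
            s.listen()
            conn, _ = s.accept()             # wait for a single client
            with conn:
                request = conn.recv(1024)
                conn.sendall(b"Hello, " + request)

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.5)                          # give the server a moment to start listening

    with socket.socket() as client:
        client.connect(("localhost", 5050))
        client.sendall(b"office computer")
        print(client.recv(1024).decode())    # prints: Hello, office computer

The same pattern, one program waiting to provide a service and another connecting to request it, underlies file servers, email, and the Web.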

At the same time, a few home computer users started reaching out to each other through the phone system. A device called a modem allowed one computer to access another over standard telephone lines. Services called bulletin-board systems (BBSs) started popping up, which allowed people to use their computers to dial in to a remote computer and leave messages and files for other users.

These developments laid the groundwork for the idea of the Internet. At the time, there were many different, incompatible networking technologies. Organizations wanted to connect their networks to other organizations’ networks but found it problematic since everyone used different types of networks. In the 1970s and 1980s, DARPA, the Defense Advanced Research Projects Agency, developed a way to unify different types of networks from different organizations under a single system so that they could all communicate. DARPA’s network, known as the ARPANET, became very popular, and other large, multi-organizational groups started using its design to create their own networks. Since these networks all used the same basic design, they were eventually joined together in the late 1980s to become the Internet.

The 1990s witnessed the rise of Internet Service Providers, or ISPs, which gave computer users a way to connect to the Internet with the same modems they had been using for bulletin-board systems. Instead of using a modem to connect to a single computer, as with a bulletin-board system, users could now use their modems to connect to a whole network through their ISP. This began the mass public adoption of the Internet by individuals and organizations of all stripes.

In the early days of the Internet, network speeds were very slow, and only text could be transmitted quickly. Eventually, modems were replaced with more advanced (and faster) ways of connecting to the Internet, such as DSL, cable, and fiber, which allowed more and more complex content to be transmitted. Because these technologies do not tie up a phone line, they can also be used continuously rather than intermittently. In addition, wireless technologies, such as WiFi and cellular networking, allowed users to connect to the Internet without being tied down by cables. Together, these developments led to the near-ubiquitous availability of the Internet that we have today.

So, today, nearly all computer software is built with the network in mind. In fact, much of the software that people use daily operates not on an individual computer but over a network. This allows users to access software programs no matter where they are or what computer they are using. It has also changed software development: the focus of computer software is no longer on individuals and individual tasks but on organizing groups of people.

2.4.1 Review

In this chapter, we covered the basic history of computers. We have learned the following:
  • Humans have used tools to accomplish tasks from the beginning.

  • Early tools were limited by available power options.

  • Advances in power technology allowed for the improvements and industrialization of tools.

  • Standardization of parts allowed more complex machines to be built and serviced.

  • Electricity allowed for the movement of power to any needed location.

  • The ability to control a machine via instructions, such as the Jacquard Loom, allowed for the creation of more general-purpose tools which could be specialized by providing the right sets of instructions.

  • Alan Turing and Alonzo Church identified the logical requirements for making general-purpose computations.

  • Several early computers were built around the idea of a general-purpose calculating machine.

  • Advances in electronics allowed millions of transistors to be placed on a single microchip.

  • The availability of microchips led to the era of personal computing.

  • The increased usage of computers in organizations eventually led to the need to have better means of communication between computers.

  • Networks were invented to allow computers to be hooked together to share files and messages.

  • The isolated networks around the world were eventually unified into a single internetwork, known as the Internet.

  • The growth of the Internet combined with the ability to access the Internet wirelessly has made the Internet a primary factor in computer usage.

  • The ubiquity of the Internet has led programmers to start designing applications with the network in mind first, rather than as an afterthought.

2.4.2 Apply What You Have Learned

  1. Take some time to think about the history of technology and the Internet. What do you think is next on the horizon for technology?

  2. The pace of technology appears to have been accelerating over the past century. What do you think has caused this acceleration?

  3. Pick your favorite piece of technology mentioned in this short history and research it. What inspired the person who developed it? What other inventions came after it? Was it successful? Write a few paragraphs describing the technology you have chosen, how it functioned, and how it impacted the future of technology.