3

Computer Science History

In this chapter, you’ll learn the technical details of every logical layer inside your computer, from what you see on your monitor to the electronic circuits that move bits of data. Learning this information is an exercise in decomposition. You’ll break down a highly complex system, the computer, into some of its smaller parts to understand them. Students familiar with the end-to-end operations of a computer will have deeper technical insights and appreciation of computer systems in general.

As we move down each layer of logic, we’ll travel through history, going back to times when those layers were primarily how humans worked with computers. We’ll go from the point-and-click interfaces we use today back to when programming required flipping electric switches and soldering circuits.

You’ll learn how each innovation hides the complexity of the layer below it and the importance of abstractions: names and symbols that allow users to interact with computers without having to know the complex details of how they operate. Students should appreciate how each generation of programmers developed abstractions that made computers more accessible to the generations that followed.

Students should realize how much they don’t know and can’t know about these immensely complex computing systems. The modern computer is a vast ecosystem of solutions built up through generations of innovators. Also, students should acknowledge that all people are ignorant in some areas of computer science and approach the subject with personal humility, sensitivity to peers who aren’t aware of certain facts, and deep gratitude for everyone who has contributed to making computer science more accessible for all.

The User Interface

Most of us are familiar with the icons that represent certain programs. But how often do we think about the real-world objects these icons are based on? Figure 3-1 shows some of the many icons that identify various applications.

Figure 3-1: The many metaphors used to abstract away computational complexity

For example, paper folders represent file locations, cogs and gears represent settings, and postal envelopes represent electronic mail. These icons use objects from the physical world to interface with algorithms and information architectures that are incomprehensibly complex to the human mind. These technical complexities become accessible to us through an interface that presents only the abstraction: a single icon we can click without a second thought.

So what does the Save icon do on the computer? When we click this icon, we send a command to the software we’re using, which in turn communicates with the computer’s operating system to save the file. The operating system takes the contents of what we’re saving from working memory and sends a command to the hard drive controllers to store the contents in long-term memory along with a directory address. The long-term memory hardware, such as a hard drive or flash drive, commits the contents as bytes of bits stored at various physical locations within the device.
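
To make this chain concrete, here is a minimal sketch of what a Save click might trigger in application code. It assumes a Node.js environment and a hypothetical onSaveClick() handler; everything below the single write call is handled by the operating system and the storage hardware.

//A sketch of the Save chain, assuming Node.js and a hypothetical onSaveClick() handler
var fs = require("fs");

var onSaveClick = function(documentText) {
  //Ask the operating system to persist the bytes to long-term storage.
  //The directory entry, the drive controller, and the physical location
  //of each byte are all handled below this single call.
  fs.writeFileSync("notes.txt", documentText, "utf8");
};

onSaveClick("Meeting notes...");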

This complex, highly technical chain of events is completely abstracted away from the end user. The user simply clicks a disk icon, watches an indicator spin, and moves on to initiating the next mind-bafflingly complex sequence of processes hidden behind this wall of abstractions.

What might surprise your students is that behind these abstractions is another layer of abstractions. These icons might trigger programming functions in the code with names like loadContactList(), checkForNewMessages(), or plotNavigationBetweenTwoPoints(), themselves representations that abstract away complexity for easy use. Figure 3-2 shows the many levels of coding abstractions, starting with the user interface (UI) and descending to the machine’s architecture.

Figure 3-2: The levels of code and languages between the user and the computer’s hardware

In this diagram, we see many levels of abstraction between the user and the computer’s hardware. As we move down the levels, the code examples grow increasingly challenging to understand because the syntax more closely conforms to the computer’s architecture and hardware configuration. In the next section, we’ll look at the code just beneath the UI.

High-Level Code

Just below the UI is the high-level code, where the syntax is more legible to humans than that of the code further down the stack. This is the code your students will work with most often in class and in the professional world. Listing 3-1 shows an elegant bit of high-level code that draws polygons in the web browser. Comments, human-readable annotations the computer ignores, are included after each // in the code to explain some of the functions.

var drawPolygon = function(context, x, y, radius, sides, color) {
  //context is our cursor
  //translate moves the cursor to coordinates x,y
  context.translate(x,y);
  //Move the cursor out the length of the radius
  context.moveTo(radius,0);
  //Calculate the angle between sides
  var a = (Math.PI * 2)/sides;
  //For each side...
  for (var i = 1; i < sides; i++) {
    //Draw a line between x, y coordinates
    //according to the calculated angle
    context.lineTo(radius*Math.cos(a*i),radius*Math.sin(a*i));
  }
  //Close the drawing path.
  context.closePath();
  //Set the fill color
  context.fillStyle = color;
  //and fill with that color
  context.fill();
}

Listing 3-1: High-level JavaScript code for rendering polygons

What we see in Listing 3-1 are many high-level functions performing what might appear to be some very simple operations. The context acts as our cursor, and using the translate() and moveTo() functions, we position the drawing point. Then we use the Math.cos() and Math.sin() functions to calculate the coordinates of each corner from the angle between sides. Finally, we draw the lines with the lineTo() function and fill the shape with the requested color using the fill() function. In just 11 lines, we perform some fairly complex calculations to draw any polygon in any color requested.
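
To run this function in class, only a couple of lines of setup are needed. The following sketch assumes an HTML page that contains a <canvas> element with the id canvas.

//A usage sketch, assuming a <canvas id="canvas"> element on the page
var canvas = document.getElementById("canvas");
var context = canvas.getContext("2d");
//Draw a blue hexagon with a 50-pixel radius centered at (100, 100)
drawPolygon(context, 100, 100, 50, 6, "blue");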

What we don’t see here—what has been abstracted away from us—is all the messy detail of how the computer executes these functions. For example, the lineTo() function tells the computer to draw a line between point A and point B. But the computer must calculate where those points are in the canvas, where that canvas is in the browser window, where that browser window is on the desktop, and what the dimensions of the desktop are. Then it has to interface with the monitor and graphics card before changing the color of each pixel between them to our chosen color. Incredible amounts of math are going on behind the scenes of this code, from the monitor down to the electronic circuits, that we don’t have to think about. The high-level code insulates the programmer from having to worry about the machine the code is running on. Instead, it lets programmers focus on what their program is trying to accomplish; however, the computer still needs code to communicate with the hardware.
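
To give students a taste of what lineTo() hides, here is a deliberately naive sketch of how a line might be turned into pixels. It assumes a hypothetical setPixel() function and ignores everything a real graphics pipeline handles, such as clipping, anti-aliasing, and the graphics card itself.

//A naive line rasterizer, assuming a hypothetical setPixel(x, y, color) function
var drawLine = function(x0, y0, x1, y1, color) {
  var dx = x1 - x0;
  var dy = y1 - y0;
  //Take one step per pixel along the longer axis
  var steps = Math.max(Math.abs(dx), Math.abs(dy), 1);
  for (var i = 0; i <= steps; i++) {
    //Color the pixel closest to each point between A and B
    var x = Math.round(x0 + (dx * i) / steps);
    var y = Math.round(y0 + (dy * i) / steps);
    setPixel(x, y, color);
  }
};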

Low-Level Code

In the many layers of interfaces between the human and the computer’s circuits, low-level code is where the hardware-specific operations are defined. Low-level code is often machine specific and will reference particular memory addresses, storage peripherals, or processor functions. As a result, it can also be quite difficult to read and requires a deep familiarity with the hardware architecture your computer uses.

Assembly Language

Even low-level code offers some human readability. Assembly language is low-level code that still uses symbolic commands, but it must operate strictly within the computer architecture on which it executes. Listing 3-2 shows a function in assembly language that will add one to the number it’s given and return the result.

def add_one(n)
  pushq %rbp       # save the caller's base pointer
  movq  %rsp, %rbp # start a new stack frame for this function
  addl  $1, %edi   # add 1 to the value of n, held in register %edi
  movl  %edi, %eax # copy the result into the return register, %eax
  popq  %rbp       # restore the caller's base pointer
  retq             # return to the point where the function was called
end

Listing 3-2: Assembly language code for adding one to a number

The def add_one(n) line defines the name of the function, which accepts the argument n. The pushq and movq lines set up a new stack frame for the function to run on, using registers, small storage locations built into this hardware architecture. The addl line adds long (32-bit) integers: the first operand is the value 1, and the second, %edi, refers to the register holding the value of n. The movl line moves the new value of n from the %edi register into the register holding the return value, %eax. Finally, the popq and retq lines free the memory and return the computer to where it was in the program before the function was called.

The equivalent function in high-level programming code would look something like this: n = n + 1; or n++;. It takes eight lines of cryptic assembly language code to accomplish what we can do in one line of high-level code. Similarly, the high-level code example in Listing 3-1 could draw any polygon in just 11 lines of code, whereas the same task in assembly code would take many more lines than that.

It takes a lot of assembly code to tell the computer where to specifically store and retrieve each piece of data. Figure 3-3 is an iconic photo of Margaret Hamilton, director of the Software Engineering Division at the MIT Instrumentation Laboratory at the time, standing next to a printout of the assembly code for the Apollo Guidance Computer (AGC), alongside the LEGO figure honoring her.

Figure 3-3: Margaret Hamilton (left) during her time as lead Apollo flight software engineer, standing next to listings of the actual Apollo Guidance Computer (AGC) source code (Photo: Draper Laboratory, 1969). Reconstruction of the iconic photo (right) from the “Women of NASA” LEGO set.

This stack of assembly language code is as tall as the programmer. When we write high-level code, it’s important to appreciate that there are extensive libraries of assembly code like this making each function call possible. On the human side, this photo of Margaret Hamilton is iconic for how it puts a relatable human face on something as technically complex as flying to the moon. Even in code this complex, programmers find ways to convey personality and levity. Listing 3-3 shows some sample lines of assembly code from the Apollo code repository. After each hash mark (#) are comments, explanations of the code for human benefit that the computer won’t read.

FLAGORGY TC       INTPRET   #  DIONYSIAN FLAG WAVING
         BZF      P63SPOT4  #  BRANCH IF ANTENNA ALREADY IN POSITION 1
        
         CAF      CODE500   #  ASTRONAUT:     PLEASE CRANK THE
         TC       BANKCALL  #                 SILLY THING AROUND
         CADR     GOPERF1                  
         TCF      GOTOP00H  #  TERMINATE
         TCF      P63SPOT3  #  PROCEED        SEE IF HE'S LYING

         TC       POSTJUMP  #  OFF TO SEE THE WIZARD ...
         CADR     BURNBABY

         CAF      V06N43*   # ASTRONAUT:  NOW LOOK WHERE YE ENDED UP

Listing 3-3: Sample code from the Apollo computer

There’s a lot of humor to be found among the cryptic commands in Listing 3-3. The GOTOP00H command is a reference to Winnie the Pooh, which was the name of the root program. The FLAGORGY command, probably an alert for erratic behavior, has a comment referencing Dionysus, the god of wine and fertility, whose name gives us Dionysian, the antonym of Apollonian. The comment PLEASE CRANK THE SILLY THING AROUND describes the intent of the CODE500 message if the antenna isn’t in its proper position, and SEE IF HE'S LYING marks the check that verifies the position again. Just before ignition, we see OFF TO SEE THE WIZARD ..., followed by the command BURNBABY, and the V06N43* display, shown when the lunar lander should be on the moon, carries the comment NOW LOOK WHERE YE ENDED UP. You can share this code with your students to highlight the human side of coding. Even when the code can be a matter of life and death in a mission to get astronauts safely to the moon, there is room for levity and personal expression.

As cryptic as the assembly language code is, even it abstracts away complexity. The registers the code references are identified by labels, and the commands it executes are named for human understanding. Even these instructions must be further translated into information the computer can decipher.

Machine Code

Although assembly code is hardware-specific and written to work with certain memory addresses in the computer architecture in which it runs, it’s still working with human-friendly abstractions and manipulating blocks of data. At the lowest level, programming code manipulates the smallest units of information in the computer: the bits, which are either one or zero. Machine code, a strictly numerical programming language, is used at this level to work with these bits.

Reading machine language code is like reading the atoms in a DNA molecule. Listing 3-4 shows an example of binary machine code used to store a text string. You can imagine the challenges of working in such an opaque syntax.

01001000 01100101 01101100 01101100 01101111
00100000 01010111 01101111 01110010 01101100 01100100

Listing 3-4: "Hello World" in ASCII binary code
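
To show students that these bits really do spell out text, the following sketch decodes the bytes in Listing 3-4 back into characters. It assumes the bits follow standard ASCII encoding and uses JavaScript’s built-in parseInt() and String.fromCharCode() functions.

//Decode the ASCII bytes from Listing 3-4 back into text
var bytes = ("01001000 01100101 01101100 01101100 01101111 " +
  "00100000 01010111 01101111 01110010 01101100 01100100").split(" ");

var text = bytes.map(function(bits) {
  //parseInt with base 2 turns "01001000" into 72,
  //and String.fromCharCode turns 72 into "H"
  return String.fromCharCode(parseInt(bits, 2));
}).join("");

console.log(text); //Hello World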

Bill Gates and Paul Allen wrote a version of the BASIC programming language for the 1975 MITS Altair 8800, an early microcomputer that was widely popular despite being meant only for hobbyists. Gates and Allen had to load their BASIC interpreter into the machine using a set of binary commands. In Figure 3-4, you can see that, instead of a monitor, the computer had only lights and switches for binary inputs and outputs.

Figure 3-4: Altair 8800 computer (Photo: National Museum of American History)

When we describe the CPU as speaking in ones and zeros, we’re abstracting away yet another layer of complexity. Even the ones and zeros are abstractions representing the amount of electricity in a circuit.

Circuits

Ones and zeros, the bits of data that make up the strings of machine code, represent “on” and “off” settings inside the computer. The computer’s CPU, which performs all calculations, is built from one or more integrated circuits (ICs), microchips that contain sets of electronic circuits. Each IC is filled with billions of transistors. The electrical state of each transistor determines whether a bit is on or off—one or zero. If you look carefully, you’ll likely find a symbol on your computer that combines both values for a bit on one of its buttons, as in Figure 3-5.

Figure 3-5: Computer power button icon

Before the Nobel Prize–winning invention of the transistor, computers used vacuum tubes, which were circuits that resembled lightbulbs about the size of your thumb. They were large and energy-hungry, produced a lot of heat, and burned out often. The first electronic general-purpose computer, the ENIAC, was unveiled in 1946. It used nearly 18,000 vacuum tubes, occupied 1,800 square feet, and weighed 30 tons. In comparison, today’s cell phones, which use ICs, have millions of times more processing power than the ENIAC.

The first programs for the ENIAC were written by six women mathematicians: Kathleen McNulty, Frances Bilas, Betty Jean Jennings, Elizabeth Snyder, Ruth Lichterman, and Marlyn Wescoff Meltzer. Programming the machine involved setting switches and plugs for various binary commands and values. At the time, the word computer referred to the job title of someone who crunched numbers. Only later did it become the name of the tool that would replace this occupation.

In Figure 3-6, you can get an idea of the size and complexity of the ENIAC’s interface with its many lights and switchboards representing the binary inputs and outputs.

Figure 3-6: Betty Jennings (left) and Frances Bilas (right) operating the ENIAC’s main control panel (Photo: ARL Technical Library, U. S. Army Photo)

Even transistors are abstractions. They represent logical operations. The first digital computer wouldn’t have been possible without the 1937 paper “A Symbolic Analysis of Relay and Switching Circuits,” the master’s thesis of an MIT student named Claude Elwood Shannon. In this milestone document, Shannon demonstrated that electric switches could be used to perform Boolean algebra, a branch of algebra in which operations manipulate true and false values. Boolean algebra was introduced by George Boole in his 1847 book, The Mathematical Analysis of Logic, and discussed in his 1854 book, An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities. Figure 3-7 shows some examples of logic gates, which model Boolean logic in a way that can be translated into circuitry.

Figure 3-7: Logic gates

The lines on the left of the symbols represent binary inputs and those on the right binary outputs. For example, the AND gate accepts two inputs and both must be 1 for the output to be 1. So 0 and 0, 1 and 0, and 0 and 1 will all output 0, whereas 1 and 1 outputs 1. The OR gate returns 1 if either input is 1. So 1 or 0, 0 or 1, and 1 or 1 will output 1, while 0 or 0 outputs 0. The NOT gate inverts any input, so an input of not 1 outputs 0 and not 0 outputs 1. Logic gates can be combined into complex configurations to model logical processes, which can then be constructed on a circuit board with the appropriate components.
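
If you want students to experiment with these gates, here is a minimal sketch, in the same JavaScript used in Listing 3-1, that models them as functions on bits and combines them into a half adder, one of the building blocks of binary addition. The gate and halfAdder names are purely illustrative, not part of any standard library.

//The three gates from Figure 3-7 modeled as functions on bits (0 or 1)
var AND = function(a, b) { return a && b ? 1 : 0; };
var OR = function(a, b) { return a || b ? 1 : 0; };
var NOT = function(a) { return a ? 0 : 1; };

//XOR built only from the gates above: (a OR b) AND NOT (a AND b)
var XOR = function(a, b) { return AND(OR(a, b), NOT(AND(a, b))); };

//A half adder adds two bits, producing a sum bit and a carry bit
var halfAdder = function(a, b) {
  return { sum: XOR(a, b), carry: AND(a, b) };
};

console.log(halfAdder(1, 1)); //{ sum: 0, carry: 1 }, binary 1 + 1 = 10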

This information explained the hardware piece of the computer puzzle. But before computer scientists could engineer machines that could automate logic, there were others who first imagined that such a thing was even possible.

Envisioning Thought Machines

Around the same time Claude Shannon was figuring out how to perform logical operations using electric circuits, Alan Turing, a polymath whose codebreaking skills saved millions of lives in World War II, was working out how discrete logical operations could combine into a computing system. In his paper “On Computable Numbers, with an Application to the Entscheidungsproblem,” Turing describes a hypothetical device now known as a Turing machine. The machine, which could be realized by a person or a mechanism, reads symbols from a potentially infinite strip of tape; stores the state of the machine in a register, a reference to the human computer’s state of mind; looks up those symbols in an instruction table; prints an output; and moves to a new position along the tape according to the instructions. In other words, he described a very primitive CPU capable of processing a computer program. For this and other achievements, he is often regarded as the father of modern computer science.
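
To make Turing’s idea concrete, here is a minimal sketch of such a machine in JavaScript. The instruction table is invented purely for illustration: this machine flips every bit on its tape and halts when it reads a blank symbol.

//A tiny Turing machine with an invented instruction table:
//it flips every bit on the tape and halts when it reads a blank ("_")
var table = {
  "flip,0": { write: "1", move: 1, next: "flip" },
  "flip,1": { write: "0", move: 1, next: "flip" },
  "flip,_": { write: "_", move: 0, next: "halt" }
};

var run = function(tape) {
  var state = "flip"; //the register holding the machine's state
  var position = 0;   //where the machine is along the tape
  while (state !== "halt") {
    var symbol = tape[position] || "_";            //read the symbol on the tape
    var instruction = table[state + "," + symbol]; //look it up in the instruction table
    tape[position] = instruction.write;            //print an output
    position = position + instruction.move;        //move along the tape
    state = instruction.next;                      //update the state register
  }
  return tape;
};

console.log(run(["1", "0", "1", "1"])); //["0", "1", "0", "0", "_"]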

Inventors dreamed of having machines perform cognitively taxing tasks long before Turing. Over the course of several decades in the 1800s, English polymath Charles Babbage proposed and attempted the construction of a mechanical calculator, which he called the Difference Engine. Later he designed a general-purpose mechanical computer, which he called the Analytical Engine. Neither machine was completed in his lifetime, but the Analytical Engine’s design included a memory store and the equivalent of a CPU, and it was programmable with punch cards.

Ada Lovelace, the daughter of the poet Lord Byron, was a mathematician and writer who described her approach as “poetical science.” She was also a longtime friend of Babbage’s, who called her “the Enchantress of Numbers.” Ada was fascinated by the Difference Engine. When translating Luigi Menabrea’s French-language paper on Babbage’s proposed Analytical Engine into English, she supplemented the paper with extensive notes, even including a detailed algorithm to calculate the Bernoulli number sequence on the engine. Because of this algorithm, Lovelace is widely considered the world’s first computer programmer. Figure 3-8 shows a watercolor portrait of her that was adopted by the Ada Initiative, an organization focused on increasing the participation of women in open source technology and culture.

Figure 3-8: Circa-1840 portrait of Ada Lovelace by Alfred Edward Chalon
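
For the mathematically inclined, here is a modern sketch of the same computation (not Lovelace’s actual Note G program), using the standard recurrence in which each Bernoulli number is built from the ones before it.

//A modern sketch of computing Bernoulli numbers (not Lovelace's Note G program)
var binomial = function(n, k) {
  //n choose k, built up one factor at a time
  var result = 1;
  for (var i = 1; i <= k; i++) {
    result = result * (n - i + 1) / i;
  }
  return result;
};

var bernoulli = function(count) {
  var B = [1]; //B(0) = 1
  for (var m = 1; m <= count; m++) {
    //Each number is derived from a weighted sum of all the earlier ones
    var sum = 0;
    for (var j = 0; j < m; j++) {
      sum = sum + binomial(m + 1, j) * B[j];
    }
    B[m] = -sum / (m + 1);
  }
  return B;
};

console.log(bernoulli(4)); //approximately [1, -0.5, 0.1667, 0, -0.0333]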

In her writings, Lovelace is clearly enchanted with the Analytical Engine. She marvels at how it manifests abstract mental processes in machine operations. She envisions a powerful language in machine code that will produce faster, more accurate analysis for the human race. She also references the Jacquard loom, which inspired the engine’s punch cards and used them to program fabric designs, saying the Analytical Engine weaves algebraic patterns the way the loom weaves flowers and leaves. In fact, when we look further back in time, we find that the weaving of logical expressions shares some of the same challenges and frustrations as weaving fabrics.

Ancient History

On a trip to Peru, I had the good fortune of visiting a mountain village where I learned of the importance of weaving in their culture. Without a written language, for thousands of years the people relied on ideograms, images used to identify food, resources, warnings, or people. The local women wove these ideograms into tapestries; we can think of the images as abstractions of the real-life things they represent, just like the icons in a computer’s UI.

In the village, the weavers had recently purchased two simple looms, which required programming by hand. Figure 3-9 shows the loom’s program in its string arrangements: inputs of thread that will later become the design output.

Figure 3-9: A loom in Peru

This loom image shows a lot of complexity, and we can imagine how daunting a challenge programming it would be. Watching the women weave by hand was like watching a computer lay down pixels line by line and bit by bit. One woman explained that the new looms were faster, but it was frustrating when they set them up incorrectly and the patterns came out wrong. This is similar to the frustration of programming: the software executes quickly, but getting the logic correct can be a challenge. In this village without electricity or running water, people were successfully taking on complex challenges that involved computational thinking and abstraction without a computer anywhere in sight.

Summary

In this chapter, we explored the many layers and innovations that make modern computing systems possible. With the UI as the starting point, we traversed down through the layers that included high-level programming code, low-level assembly language code, machine code, circuits, conceptual innovations like Boolean logic, and precursors to the computer, like programmable looms. Concurrently, we traveled back through time and met a few of the many innovators in computer science, such as Margaret Hamilton and her team at the MIT Instrumentation Laboratory; the ENIAC programmers; Claude Shannon; Alan Turing; Charles Babbage; Ada Lovelace; and the weavers of indigenous villages. All of these individuals are human beings with whom your students can identify, role models through whom students can see themselves working in computer science.

Additionally, by making students aware of the vast number of experts and innovations it took to make modern computers possible, you’ll teach them to respect the subject matter and understand that no one person can hope to know it all. Students should realize that it’s best to engage the subject with a personal humility and sensitivity to others, recognizing that everyone has blind spots when it comes to computer technologies and innovations. When students understand that computer science is complex for everyone and that the history of the field is the story of making computers more accessible to others over time through abstraction, they’ll hopefully find the subject more approachable.

In this history we saw how early computer science, which was built on circuit boards and abstract research papers, barely resembles the programming environments we work in today. Yet, as we will learn, the foundational elements of programming have remained the same over the intervening decades, and we don’t need modern computers to learn computer science.

In the next chapter, we will discover the many ways you can explore the basic building blocks of computer programming in the classroom without involving computers.
