Some Light Relief—On Computable Numbers with an Application to the Entscheidungsproblem

Generics in Java got me thinking about methods and subroutines in general. Computer science is a young enough field that it might be possible to pinpoint where subroutines were first used, and maybe even who came up with the idea. Like the answer to “What was the first computer?” the origin of subroutines is more a question about definitions than about history. Depending on how you define it, the first computer was:

  • The ENIAC (Electronic Numerical Integrator and Calculator/Computer [accounts differ]). Built by John Mauchly and J. Presper Eckert at the Moore School of the University of Pennsylvania, it was dedicated in February 1946. As originally built, it was not a stored-program computer. ENIAC led to a follow-on project, EDVAC (Electronic Discrete Variable Automatic Calculator/Computer), and eventually to a spin-off company that built the Univac and later morphed into the company of the same name (today known as Unisys).

  • The SSEC (Selective Sequence Electronic Calculator), dedicated in January 1948 and built by Wallace Eckert of IBM (no relation to John Eckert). Programs were read from paper tape or plugboards, but the system could hold instructions in memory and, in theory, operate as a stored-program computer some of the time.

  • The Manchester Mark 1 in June 1948, built by Fred Williams, Tom Kilburn and a team at Manchester University, England. This is the first machine that everyone recognizes as a computer, because it is the first system that stored its programs in the same memory used for data. The Manchester Mark 1 is unrelated to the Harvard Mark 1 built by IBM in 1943. The Harvard Mark 1 was a big calculator built out of relays, and driven by programs on paper tape (like a player piano).

  • The EDSAC (Electronic Delay Storage Automatic Computer) in May 1949 by Maurice Wilkes's team at Cambridge University in England. This was the first operational, full-scale stored-program computer. It was closely based on the EDVAC design.

There are other candidates for the title “first computer”, too. I would characterize the evolution this way: paper designs before World War II, electronic calculators during the war, and the first true computers right after it. The Manchester Mark I is the one I am putting my money on as the very first true computer. They rebuilt the Mark I to commemorate its golden anniversary in 1998, and there's a plaque on the wall of the building where it was originally constructed. The last time I was in Manchester, England, I took a photo of it.

One of the first programmers on the Manchester Mark I was the legendary computer pioneer Alan Turing. Even before he joined the Mark I team in 1948, he sent them a routine to do long division. When I get time, I want to write a simulator in Java for the Mark I—imagine the thrill of coding that retraces the footsteps of a great pioneer! Turing wrote the first-ever Programmer's Handbook, some of which you can find online at http://www.computer50.org/kgill/mark1/progman.html.

Alan Turing had worked on systems that encapsulated all three phases of early computer evolution: paper designs, electronic calculators, and true computers. His famous paper, whose title I generically re-used for this light relief (ho ho), On Computable Numbers, with an Application to the Entscheidungsproblem, was written in 1936 and published in the Proceedings of the London Mathematical Society, Vol. 42 (1936-37), pp. 230-265. It was very influential, ground-breaking work.[1] Original copies of this paper sell for $35,000, so if you find one in your home magazine rack, get down to eBay.

The Entscheidungsproblem (“decision problem”) is the challenge to find a general algorithm that can evaluate whether arbitrary statements in symbolic logic are valid or not. The statements are written in mathematical terms. Turing's contribution was to come up with a standard computing model that set aside implementation details and allowed the focus to be on underlying logic. His model came to be known as the Turing machine, and Turing proved in his paper that the Entscheidungsproblem was equivalent to trying to decide in advance if an arbitrary program for the Turing machine would halt or not. And he knew the answer to that problem!
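The model is simple enough to sketch in a few lines of Java, which makes it concrete: a tape, a head position, a current state, and a table of rules. Everything below is my own illustration, not anything from the paper; the one sample rule table just flips every bit until it reads a blank.

    import java.util.HashMap;
    import java.util.Map;

    // A toy Turing machine: tape + head + state + rule table.
    // The encoding and the sample "flip every bit" rules are invented
    // purely for illustration.
    public class TinyTuringMachine {
        public static void main(String[] args) {
            // rule key = state "," symbol; value = {symbol to write, head move, next state}
            Map<String, Object[]> rules = new HashMap<>();
            rules.put("flip,0", new Object[]{'1', 1, "flip"});
            rules.put("flip,1", new Object[]{'0', 1, "flip"});
            rules.put("flip,_", new Object[]{'_', 0, "halt"});

            Map<Integer, Character> tape = new HashMap<>();
            String input = "10110";
            for (int i = 0; i < input.length(); i++) tape.put(i, input.charAt(i));

            int head = 0;
            String state = "flip";
            while (!state.equals("halt")) {
                char symbol = tape.getOrDefault(head, '_');
                Object[] rule = rules.get(state + "," + symbol);
                tape.put(head, (Character) rule[0]);   // write
                head += (Integer) rule[1];             // move the head
                state = (String) rule[2];              // change state
            }

            StringBuilder out = new StringBuilder();
            for (int i = 0; i < input.length(); i++) out.append(tape.get(i));
            System.out.println(out);                   // prints 01001
        }
    }

The punchline of the paper is that a machine this simple, given the right rule table, can simulate any other such machine.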

It causes major pants-wetting excitement among mathematicians when they prove one deep problem is equivalent to another. Turing's breakthrough is a fundamental finding of theoretical computer science. He showed that it is undecidable to determine in advance whether an arbitrary computer program will halt or loop endlessly. This is formally known as the Halting Problem. When I read Turing's 1936 paper, I was looking for any mention of subroutines. To my joy, I found that he clearly understood and used parameterized subroutines.
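Before we get to those subroutines, the classic argument for why the Halting Problem is undecidable fits in a few lines, so here it is sketched as Java. The halts() method is hypothetical; the whole point of the sketch is that no such method can exist.

    // The classic contradiction behind the Halting Problem, as a Java sketch.
    public class HaltingSketch {

        // Hypothetical oracle: true if running 'program' on 'input' eventually
        // halts, false if it loops forever. No such method can be written.
        static boolean halts(String program, String input) {
            throw new UnsupportedOperationException("cannot exist");
        }

        // The troublemaker: do the opposite of whatever the oracle predicts
        // a program does when fed its own source text.
        static void paradox(String programSource) {
            if (halts(programSource, programSource)) {
                while (true) { }    // oracle said "halts", so loop forever
            }
            // oracle said "loops forever", so halt immediately
        }

        public static void main(String[] args) {
            // Ask: does paradox() halt when given its own source? If halts()
            // answers yes, paradox() loops; if it answers no, paradox() halts.
            // Either way the oracle is wrong, so the oracle is impossible.
            String paradoxSource = "...the source text of paradox() itself...";
            paradox(paradoxSource);
        }
    }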

Figure 15-1. In this laboratory in Manchester, England, on June 21, 1948, the world's first stored-program computer ran its first program


Since the word subroutine hadn't been invented yet, Turing called them m-configuration functions. This is a paper design, not something that pushes electrons. Still, it's pretty cool that Turing used the concept at the dawn of computing. Turing had great success cracking Nazi Enigma codes during the war years, with a special-purpose electromechanical machine called the Bombe.

In February 1946, Turing wrote a design proposal for a project known as the Automatic Computing Engine, in which he unequivocally spelled out subroutines and their implementation. Because the terms “call” and “return” were not yet in use, Turing coined the terms “bury” and “unbury” for these actions. He also proposed a stack of return addresses, and now referred to subroutines as “subsidiary operations”.

Turing wrote:

“We also wish to be able to arrange for the splitting up of operations into subsidiary operations. This should be done in such a way that once we have written down how an operation is done we can use it as a subsidiary to any other operation. ... When we wish to start on a subsidiary operation we need only make a note of where we left off the major operation and then apply the first instruction of the subsidiary. When the subsidiary is over we look up the note and continue with the major operation. Each subsidiary operation can end with instructions for this recovery of the note.”

A. M. Turing, “Proposals for the development in the Mathematics Division of an Automatic Computing Engine (ACE),” Report E882, Executive Committee, NPL, February 1946. Reprinted in April 1972 as NPL Report Com. Sci. 57.
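In modern terms, “bury” pushes a return address onto a stack and “unbury” pops it off. Here is a minimal Java sketch of that idea; the addresses and method names are invented for illustration, and only the stack of notes is Turing's.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Turing's "notes of where we left off", kept on a stack so that
    // subsidiary operations can themselves call further subsidiaries.
    public class BuryUnbury {
        static final Deque<Integer> notes = new ArrayDeque<>();

        // bury: note where the major operation left off, start the subsidiary
        static int bury(int returnAddress, int subsidiaryStart) {
            notes.push(returnAddress);
            return subsidiaryStart;     // the next instruction to execute
        }

        // unbury: look up the note and continue with the major operation
        static int unbury() {
            return notes.pop();
        }

        public static void main(String[] args) {
            int pc = 100;                 // pretend the major operation is at address 100
            pc = bury(101, 500);          // call a subsidiary operation at address 500
            System.out.println("running subsidiary at " + pc);
            pc = unbury();                // subsidiary finishes: recover the note
            System.out.println("back in the major operation at " + pc);
        }
    }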

So Turing invented the modern concept of the subroutine, but it was still only a paper proposal. The ACE project was delayed by resource constraints (Britain was still on rationing for eight years after World War II ended), bureaucracy, and Turing's limitations as a salesman. Eventually Turing gave up his ACE project and accepted an offer to move to Manchester and work on the prototype Mark I. Around this time, Turing hypothesized the “Turing Test” for artificial intelligence: if a human observer, conversing over a teletype with a computer, could not distinguish whether a person or a computer was at the other end, such a computer would have A.I. Unfortunately, this conjecture was a busted flush for Turing. The Turing test has been shown many times to be inadequate: people are just too ready to see patterns where there are none, and to confuse superficial appearance with reality.

Meanwhile, progress was forging ahead at Cambridge. The earliest subroutines that ran on real hardware were developed by David Wheeler for the EDSAC, working with M. V. Wilkes and Stanley Gill. The call/return became known as a “Wheeler Jump” and was set up as follows. Before you make the call, one instruction loads its own address into the accumulator. A second instruction sets up the return branch: it adds a predetermined constant to the accumulator, transforming the address bit pattern into a branch aimed at the word just past the calling sequence (the first instruction's address plus the length of the sequence). Note that you're working with a word that includes the op-code as well as the address portion, so you have to be careful about overflow. A third instruction then stores the newly built “return” branch from the accumulator into an address at the end of the subroutine. Finally, you branch to the beginning of the subroutine!
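Here is that calling sequence in action, simulated in Java on an invented toy accumulator machine. The opcodes, the word encoding (opcode * 1000 + address), and the memory layout are my own assumptions, nothing like the real EDSAC order code, but the planted “return” branch works the same way: the caller builds a jump-back instruction in the accumulator and stores it into the last word of the subroutine before branching to it.

    // A toy accumulator machine, invented to illustrate the Wheeler Jump.
    public class WheelerJumpDemo {
        static final int LOADSELF = 1;  // ACC = address of this instruction
        static final int ADD      = 2;  // ACC += mem[operand]
        static final int STORE    = 3;  // mem[operand] = ACC
        static final int JUMP     = 4;  // PC = operand
        static final int PRINT    = 5;  // print operand (stands in for real work)
        static final int HALT     = 6;

        static int word(int op, int operand) { return op * 1000 + operand; }

        public static void main(String[] args) {
            int[] mem = new int[32];

            // The calling sequence at addresses 0-3; execution resumes at 4.
            mem[0] = word(LOADSELF, 0);   // ACC = 0, this instruction's own address
            mem[1] = word(ADD, 8);        // ACC += the constant at 8; ACC now encodes "JUMP 4"
            mem[2] = word(STORE, 12);     // plant that return branch at the subroutine's last word
            mem[3] = word(JUMP, 10);      // finally, branch to the start of the subroutine
            mem[4] = word(HALT, 0);       // the planted branch brings control back here

            mem[8] = word(JUMP, 4);       // the "predetermined constant": a branch 4 words past the call

            mem[10] = word(PRINT, 42);    // the subroutine's work
            mem[11] = word(PRINT, 43);
            mem[12] = 0;                  // overwritten by mem[2] with the return branch

            int acc = 0, pc = 0;          // fetch-decode-execute loop
            while (true) {
                int w = mem[pc], op = w / 1000, operand = w % 1000;
                switch (op) {
                    case LOADSELF: acc = pc; pc++; break;
                    case ADD:      acc += mem[operand]; pc++; break;
                    case STORE:    mem[operand] = acc; pc++; break;
                    case JUMP:     pc = operand; break;
                    case PRINT:    System.out.println("subroutine says " + operand); pc++; break;
                    case HALT:     System.out.println("halted at " + pc); return;
                    default:       throw new IllegalStateException("bad opcode at " + pc);
                }
            }
        }
    }

In this toy version, if you relocate the whole calling sequence to some other address n, the LOADSELF step picks up n instead of 0 and the planted branch automatically becomes a jump to n+4, which is exactly the position independence that made the Wheeler Jump so useful.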

David Wheeler was the chief programmer for the EDSAC project and went on to a career of great renown. He was the one who pointed out that all problems in computer science can be solved by another level of indirection. Wheeler also invented the concept of a bootstrap: a ROM program that is run every time the machine is turned on. The first bootstrap was put in the read-only memory of the EDSAC system in August 1949. It was a very advanced bootstrap/loader that loaded the programs to be executed and also allowed them to be relocated to different parts of memory.

Bottom line: We do know who invented the subroutine, which 55 years later led to generics in Java. Alan Turing invented the concept, and Dave Wheeler was the first to see it implemented on a real system. So the next time you write foo(), you know who to thank.


[1] Some say that John von Neumann got the idea of having one memory that stores both data and instructions from Turing's paper, and that a “von Neumann machine” should therefore be called “Turing's Other Machine”. Turing did his PhD at Princeton, where von Neumann knew him and his work well (he even offered Turing a job as his assistant), so the big Hungarian was certainly familiar with Turing's results. Ideas likely flowed both ways, and they were both intellectual giants who deserve enormous credit as computer pioneers.
