Chapter 1

Introduction

We briefly analyze the relationship between logic and computer science, focusing successively on two points of view, in a somewhat natural order. We first underline the importance of logic in the foundations of computer science and show how it is used in many computer science domains. We then explain why logic is useful for computer engineers.

1.1. Logic, foundations of computer science, and applications of logic to computer science

Trying to underline the importance of logic for computer science in the 21st century amounts to reinventing the wheel. Indeed, in 1969, C.A.R. Hoare wrote:

Computer programming is an exact science, in that all the properties of a program and all the consequences of executing it can, in principle, be found out from the text of the program itself by means of purely deductive reasoning.

More recently, in 1986, Hoare stated that “computers are mathematical machines and computer programs are mathematical expressions”.

Of course, in our defence of logic, we will not make use of arguments relying on Hoare’s renowned expertise, as these arguments may turn out to be fallacious; however, we attempt to shed some light on this kind of sentence and to convince the reader that the importance of logic for computer science is a fact and not a matter of opinion.

Logical concepts have been of prime importance in computer science, and this is still the case nowadays.

Instead of making a potentially tedious enumeration, we shall simply mention two typical concepts of computer science, production rules and formal languages, the origins of which are seldom mentioned and which were, respectively, invented by logicians in 1921 (Post) and 1879 (Frege).

We cannot overstate the importance of studying the foundations and the history of a discipline, as the following quote illustrates.

In 1936, A. Turing introduced his notion of an abstract computer, as part of his solution to one of the problems posed by D. Hilbert. Yet, H. Aiken (one of the pioneers of computer science) wrote in 1956 (as quoted by M. Davis):

If it should turn out that the basic logics of a machine designed for the numerical solution of differential equations coincide with the logics of a machine intended to make bills for a department store, I would regard this as the most amazing coincidence that I have ever encountered.

Such a statement would make any undergraduate-level computer scientist smile today.

Another point of view, which does not enjoy such a good press currently, is the philosophical one. To convince oneself of its importance, it suffices to recall the role of intuitionism in computer science (constructive program synthesis, etc.). Several philosophical questions arise naturally in elementary logic (paradoxes, truth values, possible worlds, intention, extension, etc.).

The importance of temporal logic in modern computer science is undeniable, and is due (including for philosophical motivations) to philosopher-logicians such as A. N. Prior (see section 10.4).

The philosophical point of view is essential to understand a topic, and of course, understanding is crucial from a practical point of view.

An example is the now-popular term ontology, which J. McCarthy borrowed from philosophy in the 1970s. Currently, in computer science and artificial intelligence (AI), the meaning of this term retains connections (even though they may not be that obvious) with its original philosophical meaning, i.e. the "theory of being as such – the branch of metaphysics that deals with the nature of being, as opposed to the study of particular beings and their attributes".

We conclude this section by mentioning three so-called theoretical topics, of the utmost practical importance:

– The NP-completeness of the satisfiability problem of classical propositional calculus (SAT), which was proved by Cook in 1971 (the dual problem, validity, is co-NP-complete). The wide array of applications of propositional calculus (verification of critical systems, intelligent systems, robotics, constraints, etc.) gives an idea of the importance of this result.

– The study of finite structures, which is closely related to computer science and has numerous applications in databases, multi-agent systems, etc.

– Non-classical logics (modal, temporal, and multi-valued logics), which are extremely useful in, e.g. program analysis and knowledge representation (in particular, distributed knowledge representation).
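To make the SAT problem mentioned above concrete, here is a minimal sketch of a Davis-Putnam-style (DPLL) satisfiability test. It is an illustration only, not an industrial solver; the clause encoding (sets of signed integers) and the function names are our own choices.

```python
# A minimal sketch of a DPLL satisfiability test.  A CNF formula is a
# list of clauses; each clause is a set of integer literals (a negative
# integer denotes a negated variable).

def simplify(clauses, lit):
    """Simplify the clause set under the assumption that `lit` is true."""
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause already satisfied
        out.append({l for l in c if l != -lit})
    return out

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict: variable -> bool) or None."""
    if assignment is None:
        assignment = {}
    clauses = [set(c) for c in clauses]
    # Unit propagation: a one-literal clause forces that literal.
    unit = next((c for c in clauses if len(c) == 1), None)
    while unit:
        lit = next(iter(unit))
        assignment[abs(lit)] = lit > 0
        clauses = simplify(clauses, lit)
        unit = next((c for c in clauses if len(c) == 1), None)
    if any(len(c) == 0 for c in clauses):
        return None                       # empty clause: contradiction
    if not clauses:
        return assignment                 # all clauses satisfied
    # Split on a variable occurring in the formula.
    var = abs(next(iter(clauses[0])))
    for lit in (var, -var):
        result = dpll(simplify(clauses, lit), {**assignment, var: lit > 0})
        if result is not None:
            return result
    return None

# (p or q) and (not p or q) and (not q or r), with p = 1, q = 2, r = 3
model = dpll([{1, 2}, {-1, 2}, {-2, 3}])
```

The worst-case running time is, of course, exponential, in accordance with the NP-completeness result; unit propagation and good splitting heuristics are what make such procedures effective in practice.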

1.2. On the utility of logic for computer engineers

Although no one can say for sure which concepts and techniques will be useful to a computer scientist in, say, 10, 20, 30, or 40 years (or even whether computer scientists will still be around by then), human beings will probably not have changed that much and will have to cope with an increasingly complex world (at least in developed countries). Abstract concepts will turn out to be more and more useful from the point of view of understanding and efficiency: it suffices to keep in mind all the advantages (in design, encoding, and updating) that very high-level languages offer.

It is also important to remember that one of the most efficient and rewarding recipes for economic success, especially in the modern world, is originality. History shows that the original ideas that have led to important progress in science and technology are mostly the consequence of an in-depth analysis (and possibly a revision) of principles.

Some of the fundamental notions of logic are used by computer engineers (sometimes under different names and presentations): for example, proposition, definition, semantics, inference, language/meta-language, intention, intension, extension, subclasses of formulas, and model.

The notion of proposition from an intuitionistic point of view, for example, represents an intention, a task or a problem. The notion of definition, which has been studied for centuries by logicians, is closely related to that of specification and is used in the process of program construction (folding and unfolding rules).

The role of logic as a language for software specification/verification/development is unchallenged. The notion of semantics leads directly to that of compilation (the assignment of a semantics can be viewed as a translation). Inference is a way of making information explicit and is therefore strongly related to algorithm design. The notion of intention is related not only to that of specification but also to that of system modeling (and not only program modeling), in which it is also necessary to model the environment (including users). The notions of intension and extension are used, for example, in languages such as Datalog (studied in logic and databases). The notion of decidable subclasses of formulas is an example of a problem whose nature changes when one of its subproblems is considered. The notion of model in the context of abduction (which is used, for example, in diagnosis or in natural language understanding) is similar to that of the empirical sciences.
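The intension/extension distinction in Datalog can be illustrated with a small sketch: an extensional relation is a stored set of facts, while an intensional relation is defined by rules and its extension is computed as a fixpoint. The relation names and facts below are invented for illustration.

```python
# Intension vs. extension, Datalog-style.  The extensional database (EDB)
# lists facts explicitly; the intensional relation `ancestor` is defined
# by rules, and its extension is computed as a least fixpoint.

parent = {("anne", "bob"), ("bob", "carl")}   # EDB: explicitly stored facts

def ancestor(parent_facts):
    """Extension of the intensionally defined relation:
         ancestor(X, Y) :- parent(X, Y).
         ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z)."""
    anc = set(parent_facts)                   # first rule
    while True:                               # iterate the second rule
        new = {(x, z)
               for (x, y) in parent_facts
               for (y2, z) in anc if y == y2}
        if new <= anc:                        # nothing new: fixpoint reached
            return anc
        anc |= new

# The computed extension contains the derived fact ("anne", "carl"),
# which is never stored explicitly.
```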

These are, of course, very general concepts, but let us mention some concrete examples that have extremely important practical applications. The Davis-Putnam method is used, for example, in system validation. The notion (algorithm) of subsumption is used, among other things, in knowledge representation languages.

This notion is essential in so-called ontologies (considered as sets of concepts, relationships between these concepts and means to reason on these concepts), which are widely used in taxonomies and sometimes defined as the explicit specification of a simplified and abstract view of a world we wish to represent.

The notion (algorithm) of unification has been at the intersection of logic and computer science for many years. This algorithm is specified in a very natural way as a set of inference or rewriting rules (see section 4.2). Databases are one of the traditional areas of application of computer science. We can view a relational database as a first-order structure, and first-order logic as a query language (see remark 6.1). The search for increasingly powerful query languages shows the practical need for logics with a greater expressive power.
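The unification algorithm mentioned above can be sketched in a few lines. This is a simplified illustration under our own conventions (variables are capitalized strings, compound terms are tuples of the form (functor, arg1, ..., argn)), with the occurs check included.

```python
# A sketch of syntactic unification.  Variables are strings starting
# with an upper-case letter; compound terms are tuples
# ("functor", arg1, ..., argn); constants are lower-case strings.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings recorded in the substitution."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    """Return a most general unifier (a dict) of s and t, or None."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return unify(t, s, subst)
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                     # clash: different functors or arities

# f(X, g(a)) unifies with f(b, g(Y)) under {X: b, Y: a}
sigma = unify(("f", "X", ("g", "a")), ("f", "b", ("g", "Y")))
```

The occurs check is what prevents unifying X with f(X); many Prolog implementations omit it by default for efficiency.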

These examples should be enough to convince the reader of the importance of logic for computer engineers.

To answer those for whom the incentive to learn logic can only come from its recent applications or its relationship to recent applications, we enumerate some applications that have been receiving increasing attention.

The first application is multi-agent systems, which are used in important domains such as robotics, the Internet, etc. These systems can have extremely complex configurations, and it is becoming necessary to have (formal) techniques to reason in (and on) them. The notions and tools used include temporal logic, knowledge representation logics, deduction, abduction (i.e. the discovery of hypotheses capable of explaining an observation or of permitting us to reach a conclusion), and the notion of proof, which can be used to convince an agent (in particular a human agent) of the pertinence of a piece of information.

Modal logics (temporal logic, dynamic logic, etc.) are used in industry, for example, to capture reactive behaviors or to reason about concurrent systems.

An increasing number of computer engineers now work in the economic and financial industry. In these disciplines, the modeling of the actors and of their knowledge (beliefs) about the other actors (on the market, in society) is indispensable for understanding and decision-making. Assertions such as “X knows (believes) that Y knows (believes) that Z knows (believes) that…” can be treated formally by knowledge (belief) logics.
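Such nested assertions are evaluated in a Kripke model: each agent has an accessibility relation over possible worlds, and "X knows φ" holds at a world when φ holds at every world X considers possible. The worlds, relations, and valuation below are invented purely for illustration.

```python
# A sketch of evaluating formulas of a knowledge logic in a Kripke model.
# The model (worlds, per-agent accessibility, valuation) is hypothetical.

worlds = {"w1", "w2", "w3"}
access = {   # agent -> accessibility relation (set of world pairs)
    "X": {("w1", "w1"), ("w2", "w2"), ("w3", "w3"),
          ("w1", "w2"), ("w2", "w1")},
    "Y": {("w1", "w1"), ("w2", "w2"), ("w3", "w3")},
}
true_at = {"p": {"w1", "w2"}}     # valuation: proposition -> worlds where true

def holds(formula, world):
    """formula is 'p', ('not', phi), or ('K', agent, phi)."""
    if isinstance(formula, str):                  # atomic proposition
        return world in true_at[formula]
    if formula[0] == "not":
        return not holds(formula[1], world)
    if formula[0] == "K":                         # K_a phi: phi holds in all
        _, agent, phi = formula                   # worlds a considers possible
        return all(holds(phi, v)
                   for (u, v) in access[agent] if u == world)
    raise ValueError(formula)

# At w1, Y knows p, and X knows that Y knows p: X considers w1 and w2
# possible, and "Y knows p" holds at both.
nested = holds(("K", "X", ("K", "Y", "p")), "w1")
```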

In a science popularization article that was published in a well-known journal in May 2001, we could read:

The semantic web is…an extension of the current one.

[…]

For the semantic web to function, computers must have access to structured collections of information and sets of inference rules that they use to conduct automated reasoning.

Logic is very important for natural language processing, which has many applications (e.g. on the Internet, for information retrieval, in question answering systems, etc).

Interdisciplinarity has reached the most recent computer science applications, such as multimedia indexing (i.e. the way of finding multimedia documents in digital libraries), where the roles of semantics and inference are essential.

We conclude this section with some considerations that are more directly related to the technical problems that arise in computer system programming.

– Using examples (i.e. tests), it is possible to prove that a program is incorrect, but it is generally impossible to prove that it is correct. The importance of detecting potential errors in programs (or in integrated circuit designs) is evidenced by two dramatic examples that took place 30 years apart.

The spacecraft Mariner 1 (July 1962) was meant to fly by Venus but failed because of an error in its guidance program (a syntactically correct instruction, very close to the intended one, had an entirely different meaning).

Logical techniques have been developed to prove program correctness, to detect programming errors, and to construct programs that satisfy a given specification. Most of these techniques can be applied automatically. To reason automatically about a program, it is necessary to have a formal semantics, a formal logical theory, and an automated theorem prover for this theory.

In general, we are interested in verifying specifications, i.e. in proving properties satisfied by the specifications rather than in proving properties satisfied by the programs themselves.

Nowadays, more and more certified software is used. The practical importance of this notion cannot be overstated (it suffices to reflect on the importance of having certified software in a plane or a hospital). More recently (in the mid-1990s), an error was discovered in a digital circuit that had already been commercialized by the biggest manufacturer of digital circuits. The error occurred only when rare inputs were provided to the circuit.

This error served as an argument for those who advocated replacing simulation, which cannot be exhaustive, with formal verification, so as to prove the correctness of a design before the construction phase.

Theorem proving and model checking are two techniques that enable us to certify the correctness of the aforementioned digital circuits. Logic (both classical and non-classical) plays a key role in these two approaches.

Some resounding successes in software and hardware engineering have proved that these approaches can indeed be applied to real-world problems.

Formal verification had typically been neglected because it was considered too theoretical, but it can now have a huge financial impact.

– Logic programming consists of using logic as a programming language. The programming language Prolog, as well as others that followed (in particular, constraint logic programming languages), arose naturally from the notions and methods studied in logic.

– Some logical paradoxes, in particular Russell’s paradox, are closely related to the halting problem, i.e. the question of the existence of an algorithm that can decide, for any program provided as input, whether that program will halt. It is well known that no such algorithm can exist.
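The connection with Russell’s paradox is a diagonal argument, which can be sketched as follows. The function names are ours, and the hypothetical decider `halts` cannot actually be implemented; the sketch only shows why its existence would be contradictory.

```python
# A sketch of the diagonal argument behind the undecidability of the
# halting problem.  Suppose (hypothetically) there were a total function
# halts(program, argument) deciding whether program(argument) halts.

def make_diagonal(halts):
    """Build the 'diagonal' program that defeats the claimed decider."""
    def diagonal(program):
        # Do the opposite of what `halts` predicts for program(program):
        if halts(program, program):
            while True:          # predicted to halt -> loop forever
                pass
        return "halted"          # predicted to loop -> halt immediately
    return diagonal

# diagonal(diagonal) halts  iff  halts(diagonal, diagonal) is False
# iff  diagonal(diagonal) does not halt: a contradiction, exactly as a
# set containing itself iff it does not contain itself.  Hence no such
# `halts` can exist.
```

Running the sketch against any claimed decider refutes it on the diagonal input: if the decider answers "does not halt", the diagonal program promptly halts.
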

– Over the past few years, there has been a boom in computer systems that exhibit intelligent behavior (such systems are mostly studied in the discipline known as AI).

The principles used to design these systems are closely related to classical and non-classical logic. The role of logic in the social sciences must not be dismissed either: logic is considered a preferred tool in, e.g. the study of intelligent interactions.
