As announced in Chapter 1’s introduction, this third chapter about logic continues where the first one stopped.
In the section “Algebraic Properties,” we’ll revisit rewrite rules to focus on certain algebraic properties of our logical connectives: identity, commutativity, associativity, distributivity, reflexivity, transitivity, idempotence, double negation, and absorption.
In the section “Quantifiers,” we’ll explore quantification. Quantification over a set is a powerful concept to bind parameters, and as such it’s important in data management. The two most important quantifiers are the universal quantifier (represented by the symbol ∀ and read “for all”) and the existential quantifier (represented by the symbol ∃ and read “there exists”). Once these two quantifiers are introduced, we’ll identify some important rewrite rules concerning quantifiers. We’ll also investigate quantification over the empty set, which is often misunderstood and therefore a source of confusion.
The section “Normal Forms” discusses normal forms in logic—not to be confused with normal forms of data (defined for database design purposes). Normal forms can help you to structure your predicates. We’ll explore the following two normal forms for predicates:
- Conjunctive normal form (CNF), based on AND
- Disjunctive normal form (DNF), based on OR

After the “Chapter Summary,” there’s another section with exercises, focusing on quantifiers and normal forms. We again strongly advise you to spend sufficient time on these exercises before moving on to other chapters.
This chapter ends the formal treatment of logic; Chapter 4 will continue with more set theory.
From regular mathematics, we know that arithmetic operations such as sum and product have certain properties, such as those shown in Listing 3-1.
-(-x) = x
x+y = y+x
(x+y)+z = x+(y+z)
x*(y+z) = (x*y)+(x*z)
Such arithmetic properties are referred to as algebraic properties. Logical connectives have similar algebraic properties. Table 1-13 in Chapter 1 showed 19 rewrite rule examples, subdivided into categories. The names of those categories refer to algebraic properties; they are important enough to justify giving them more attention, so the following sections discuss them in more detail, while adding more properties.
Just as the numeric value 1
is the identity under regular multiplication (a multiplication of any given number by 1
will result in that given number), and 0
is the identity under regular addition (0
added to any number results in that number), you also have similar identities in logic, which are shown in Listing 3-2.
P ∧ TRUE ⇔ P
P ∨ FALSE ⇔ P
P ∧ FALSE ⇔ FALSE
P ∨ TRUE ⇔ TRUE
The first expression states that TRUE
is the identity with respect to AND
. The second states that FALSE
is the identity with respect to OR
. The last two expressions are also known as the two boundedness identities; they state that anything AND
ed with FALSE
is FALSE
and anything OR
ed with TRUE
is TRUE
.
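These identity and boundedness laws can be verified mechanically by enumerating truth values. The following Python sketch does exactly that; the helper name `equivalent` is my own, not from the text:

```python
# Brute-force check of the identity and boundedness laws from Listing 3-2.
def equivalent(f, g):
    """TRUE iff f and g agree for every truth value of P."""
    return all(f(p) == g(p) for p in (True, False))

# P AND TRUE  <=> P      (TRUE is the identity of AND)
assert equivalent(lambda p: p and True, lambda p: p)
# P OR FALSE  <=> P      (FALSE is the identity of OR)
assert equivalent(lambda p: p or False, lambda p: p)
# P AND FALSE <=> FALSE  (boundedness)
assert equivalent(lambda p: p and False, lambda p: False)
# P OR TRUE   <=> TRUE   (boundedness)
assert equivalent(lambda p: p or True, lambda p: True)
print("all identity laws hold")
```

Because each law involves only one proposition P, two truth values are enough to check it exhaustively.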
An operator (or a connective) is commutative if and only if it is dyadic and its left and right operands can be interchanged without affecting the result. Listing 3-3 shows the commutative properties of some logic operators.
( P ∧ Q ) ⇔ ( Q ∧ P )
( P ∨ Q ) ⇔ ( Q ∨ P )
( P ⇔ Q ) ⇔ ( Q ⇔ P )
Conjunction, disjunction, and equivalence are commutative connectives.
If you want to repeat the same dyadic operator on a series of operands and it doesn’t matter in which order you process the occurrences of the operator, the operator is associative. Note the parentheses in the propositional forms shown in Listing 3-4.
( ( P ∧ Q ) ∧ R ) ⇔ ( P ∧ ( Q ∧ R ) )
( ( P ∨ Q ) ∨ R ) ⇔ ( P ∨ ( Q ∨ R ) )
Conjunction and disjunction are associative. Associativity is the reason why you often see that parentheses are left out in expressions with iterated conjunctions or disjunctions that involve more than two operands.
Just like commutativity and associativity, you probably remember distributivity from regular arithmetic, typically expressed as a formula like the following, which was shown at the beginning of this section:
a*(b+c) = (a*b) + (a*c)
This algebraic property shows a certain relationship between addition and multiplication; in regular arithmetic, multiplication distributes over addition. In logic, something similar holds for the relationship between conjunction and disjunction, as shown in Listing 3-5.
( P ∨ ( Q ∧ R ) ) ⇔ ( ( P ∨ Q ) ∧ ( P ∨ R ) )
( P ∧ ( Q ∨ R ) ) ⇔ ( ( P ∧ Q ) ∨ ( P ∧ R ) )
As these tautologies demonstrate, conjunction is distributive over disjunction and vice versa. Note that in regular arithmetic, addition does not distribute over multiplication.
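Since the distributive laws involve three propositions, checking all eight truth combinations settles them. A small Python sketch (the helper name `tautology3` is my own):

```python
from itertools import product

# Brute-force check of the distributive laws in Listing 3-5
# over every combination of truth values for P, Q, and R.
def tautology3(pred):
    return all(pred(p, q, r) for p, q, r in product((True, False), repeat=3))

# P OR (Q AND R)  <=>  (P OR Q) AND (P OR R)
assert tautology3(lambda p, q, r: (p or (q and r)) == ((p or q) and (p or r)))
# P AND (Q OR R)  <=>  (P AND Q) OR (P AND R)
assert tautology3(lambda p, q, r: (p and (q or r)) == ((p and q) or (p and r)))
print("distributivity verified")
```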
The regular equality is an example of a reflexive operator; x = x
is TRUE
for any x
(regardless of the type of x
). In logic, the implication and equivalence connectives are reflexive, because the predicates shown in Listing 3-6 are always TRUE
.
P ⇒ P
P ⇔ P
The regular equality is transitive, as expressed by the following formula:
((x = y) ∧ (y = z)) ⇒ (x = z)
Conjunction, implication, and equivalence are transitive in logic, as shown in Listing 3-7.
( ( P ∧ Q ) ∧ ( Q ∧ R ) ) ⇒ ( P ∧ R )
( ( P ⇒ Q ) ∧ ( Q ⇒ R ) ) ⇒ ( P ⇒ R )
( ( P ⇔ Q ) ∧ ( Q ⇔ R ) ) ⇒ ( P ⇔ R )
Note that the disjunction is not transitive: (P ∨ Q) ∧ (Q ∨ R)
does not imply P ∨ R.
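An exhaustive search makes both claims concrete: the transitivity of implication is a tautology, while the corresponding statement for disjunction has a counterexample. A Python sketch:

```python
from itertools import product

implies = lambda a, b: (not a) or b

# Transitivity of implication (Listing 3-7) holds for all truth values.
assert all(implies(implies(p, q) and implies(q, r), implies(p, r))
           for p, q, r in product((True, False), repeat=3))

# Disjunction is NOT transitive: search for counterexamples to
# ((P OR Q) AND (Q OR R)) => (P OR R).
counterexamples = [(p, q, r) for p, q, r in product((True, False), repeat=3)
                   if not implies((p or q) and (q or r), p or r)]
print(counterexamples)  # the single counterexample: P FALSE, Q TRUE, R FALSE
```

With P and R both FALSE and Q TRUE, both disjuncts of the antecedent are TRUE, yet P ∨ R is FALSE.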
The laws of De Morgan (1806–1871) are quite useful for manipulating expressions involving negation, and are shown in Listing 3-8.
¬( P ∨ Q ) ⇔ ( ¬P ∧ ¬Q )
¬( P ∧ Q ) ⇔ ( ¬P ∨ ¬Q )
Using plain English, these laws describe that “the negation of a disjunction equals the conjunction of the negations (of the operands)” and vice versa “the negation of a conjunction equals the disjunction of the negations.”
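Both laws can again be confirmed by enumeration; the helper name `de_morgan_holds` below is my own:

```python
from itertools import product

# De Morgan's laws (Listing 3-8), checked over every truth combination.
def de_morgan_holds():
    return all(
        (not (p or q)) == ((not p) and (not q)) and
        (not (p and q)) == ((not p) or (not q))
        for p, q in product((True, False), repeat=2)
    )

assert de_morgan_holds()
print("De Morgan's laws verified")
```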
Some (dyadic) operators have the property that if their operands are equal, then their result is equal to this operand. The operands are said to be idempotent under the operator. In logic, propositions are idempotent under disjunction and conjunction, as shown in Listing 3-9.
( P ∧ P ) ⇔ P
( P ∨ P ) ⇔ P
If you apply the negation twice, you should get back the original operand. In other words, the negation operates as the complement, as shown in Listing 3-10.
¬¬P ⇔ P
The absorption properties are not so well known and might be a little difficult to understand at first glance. They are listed in Listing 3-11.
( P ∨ ( P ∧ Q ) ) ⇔ P
( P ∧ ( P ∨ Q ) ) ⇔ P
The first absorption rule states that in a disjunction of two predicates (say R1 ∨ R2
) where one of the disjuncts (say the right one R2
) is stronger than the other disjunct (R1
), then R2
may be left out without affecting the truth value of the predicate.
Note Recall from the section “Predicate Strength” in Chapter 1 that a predicate R2
is stronger than a predicate R1
if and only if R2
⇒ R1
. In the first absorption rule, the right disjunct P ∧ Q
is always stronger than P
(left disjunct), because the predicate (P ∧ Q)
⇒ P
is a tautology.
Similarly, the second absorption rule states that in a conjunction of two predicates, say R1
∧ R2
, where one of the conjuncts (say the right one R2
) is weaker than the other conjunct (R1
), then R2
may be left out without affecting the truth value of the predicate.
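A two-line enumeration confirms both absorption rules; `absorption_holds` is a helper name of my own:

```python
from itertools import product

# Both absorption laws of Listing 3-11, verified by enumeration: the truth
# value of P OR (P AND Q) and of P AND (P OR Q) is always just P.
def absorption_holds():
    return all((p or (p and q)) == p and (p and (p or q)) == p
               for p, q in product((True, False), repeat=2))

assert absorption_holds()
print("absorption verified")
```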
The proof of the two absorption equivalences is left as an exercise at the end of this chapter.
Chapter 1 clarified the difference between propositions and predicates. It talked about how you can turn a predicate into a proposition. There are two ways to achieve that goal: you can supply a value for every parameter of the predicate, or you can bind the parameters through quantification. The first method is straightforward and is explained in Chapter 1; this section explains the second method.
To use quantification for this goal, you must first define which set of possible values a parameter of the predicate, say x
, is allowed to choose its values from. Through quantification over this set of possible values, you can then say something about how many of those possible values have a certain property (this is, where the predicate is turned into a proposition). For example, you could say that there are precisely 42 values for x
with a certain property. However, the two most common quantifying statements are as follows:
- There is at least one value for x for which it is TRUE that [declarative sentence with x].
- For all values of x it is TRUE that [declarative sentence with x].

The first statement says something about the existence of at least one value; therefore it’s referred to as existential quantification. The second statement is known as universal quantification because you say something about all elements of a given set. You can represent the existential and universal quantifiers with two symbols, as you can see in Table 3-1.
Symbol | Description |
∃ | Existential quantifier (There exists . . .) |
∀ | Universal quantifier (For all . . .) |
Let P(x)
and Q(y)
be given predicates with parameters x
and y
, respectively. Let set A
be a given set representing the possible values for parameter x
, and set B
the same for y
. The general syntax for expressions using quantification is shown in Listing 3-12.
∃x∈A: P(x)
∀y∈B: Q(y)
The meaning of these two expressions is defined as follows:
- ∃x∈A: P(x) means: there is at least one value in set A that will transform predicate P(x) into a TRUE proposition when that value is provided as the value for parameter x of predicate P(x).
- ∀y∈B: Q(y) means: for every value in set B, predicate Q(y) will transform into a TRUE proposition when such value is provided as the value for parameter y of predicate Q(y).

Note Whether these two quantified expressions are TRUE obviously depends on the sets involved (A and B) and the given predicates, P(x) and Q(y).
We can now revisit an example proposition that was given in the section “Propositions and Predicates” in Chapter 1: all mathematicians are liars
. This proposition is actually a universal quantification. Following is a formal way of specifying this predicate:
∀x∈M: "x is a liar"
Here M
represents the set of all persons who are mathematicians. This set is used to quantify over the predicate "x is a liar"
—a predicate that contains one parameter named x
. This quantification binds parameter x
(it supplies values for this parameter). The resulting quantification is now a proposition; it effectively doesn’t have a parameter anymore.
Look at the following three examples (using the set of natural numbers that was introduced in Chapter 2):
∃x∈N: x < 42
∀y∈N: y/2 ∈ N
∀z∈N: z ≥ 0
The first and third predicate (more precisely, propositions, because they effectively don’t have a parameter) are TRUE
, the second one is FALSE
. To show that the first one—an existential quantification—is TRUE
, it’s sufficient to give a single example. In this case, we must supply a single natural number for x
that is smaller than 42
; for example, 41
is a natural number less than 42
. To show a universal quantification is FALSE
(the second one), it’s sufficient to give a single counter example; for example, 3
(an odd number) divided by 2
results in 1.5
(one and a half), which is not a natural number. For the third statement, you’ll have to check all natural numbers and come up with a single natural number not greater than or equal to zero, to show that the proposition is FALSE
. Of course, you’ll never find such a number. The smallest natural number is 0
(zero)—which is greater than or equal to 0
—therefore, the predicate is TRUE
for all natural numbers.
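In Python, `any` and `all` mirror ∃ and ∀ directly. We cannot enumerate all of ℕ, so in the sketch below the universal checks run over a finite prefix of the natural numbers, which is only evidence over that range, not a full proof; the witness and counterexample reasoning, however, is exactly as in the text:

```python
# The three quantified statements, interpreted over a finite prefix of N.
N_prefix = range(0, 1000)

# EXISTS x in N: x < 42  -- a single witness (e.g. 41) suffices
assert any(x < 42 for x in N_prefix)

# FORALL y in N: y/2 in N  -- a single counterexample (y = 3) refutes it
assert not all(y % 2 == 0 for y in N_prefix)

# FORALL z in N: z >= 0  -- holds for every natural number in the range
assert all(z >= 0 for z in N_prefix)
print("checks match the text")
```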
When a parameter of a predicate gets bound by a quantifier, the name of the parameter becomes free to choose. Let’s clarify this with an example. Consider the predicate "x lives in Utrecht"
. This predicate has one parameter that happens to have been named x
. Let’s now define a new proposition by universally quantifying this predicate across set {Toon, Lex, Chris, Cary, Hugh}
as follows:
∀x∈{Toon, Lex, Chris, Cary, Hugh}: "x lives in Utrecht"
It’s probably not too difficult to see that this proposition is equivalent to the following compound proposition (a conjunction with five conjuncts):
"Toon lives in Utrecht"
∧ "Lex lives in Utrecht"
∧ "Chris lives in Utrecht"
∧ "Cary lives in Utrecht"
∧ "Hugh lives in Utrecht"
You should realize that the original name of the parameter (x
) doesn’t influence the meaning of the universal quantification that constitutes our new proposition. If you replace parameter x
with a parameter named y
—both at the quantifier and inside the predicate that is quantified—you’ll end up with a proposition that’s logically equivalent to the preceding conjunction of five conjuncts:
∀y∈{Toon, Lex, Chris, Cary, Hugh}: "y lives in Utrecht"
You can choose names of bound parameters in quantifications (both universal and existential) arbitrarily. Sometimes it’s convenient to pick a different name for such a parameter to avoid confusion.
Because a quantified expression, like the preceding examples, is a predicate, it’s perfectly valid to use such an expression as a component of a compound predicate. That is, you can combine quantified expressions with other predicates by using the logical connectives introduced in Chapter 1. For example, the following two expressions are valid predicates (in fact, both are propositions):
( ∀x∈A: x > 0 ) ∧ ( ∃y∈B: y < 10 )
¬( ∀z∈A: z > 10 ) ⇒ ( ∃x∈A: x < 11 )
Although databases grow larger and larger these days, they are always finite by definition—as opposed to sets in mathematics, which are sometimes infinite. If you work with finite sets only, you can always treat the existential and universal quantifiers as iterated OR
and iterated AND
constructs, respectively. Iteration is done over all elements of the set from which the quantifier parameter is drawn; for each element value, the occurrences of the variable inside the predicate are replaced by that value. This alternative way of interpreting quantified expressions might help when you investigate their truth values.
For example, suppose P
is the set of all prime numbers less than 15
:
P := {2,3,5,7,11,13}
Then these two propositions
∃x∈P: x > 12
∀y∈P: y > 3
are logically equivalent with the following two propositions:
Iterate x
over all elements in P: 2>12 ∨ 3>12 ∨ 5>12 ∨ 7>12 ∨ 11>12 ∨ 13>12
Iterate y
over all elements in P: 2>3 ∧ 3>3 ∧ 5>3 ∧ 7>3 ∧ 11>3 ∧ 13>3
You could now manually compute the truth value of both predicates. If you evaluate the truth value of every disjunct, the first proposition becomes FALSE ∨ FALSE ∨ ... ∨ FALSE ∨ TRUE
, which results in TRUE
. If you do the same for the conjuncts of the second proposition, you get FALSE ∧ FALSE ∧ TRUE ∧ ... ∧ TRUE,
which results in FALSE
.
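The iterated-OR and iterated-AND reading can be written out directly in Python, side by side with the `any`/`all` form of the same two propositions:

```python
# Quantifiers over the finite set P = {2,3,5,7,11,13}, written both with
# any/all and as the explicit iterated OR/AND from the text.
P = {2, 3, 5, 7, 11, 13}

exists = any(x > 12 for x in P)   # EXISTS x in P: x > 12
forall = all(y > 3 for y in P)    # FORALL y in P: y > 3

# The same propositions as iterated disjunction/conjunction:
iterated_or  = (2 > 12) or (3 > 12) or (5 > 12) or (7 > 12) or (11 > 12) or (13 > 12)
iterated_and = (2 > 3) and (3 > 3) and (5 > 3) and (7 > 3) and (11 > 3) and (13 > 3)

assert exists == iterated_or == True    # 13 > 12 supplies the witness
assert forall == iterated_and == False  # 2 > 3 supplies the counterexample
```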
The empty set (⊘) contains no elements. Because it still is a set, you can quantify over the empty set. The following two expressions are valid propositions:
∃x∈⊘: P(x)
∀y∈⊘: Q(y)
But what does it mean to say that there exists an element x
in the empty set for which P(x)
holds? Clearly, the empty set has no elements. Therefore, regardless of the involved predicate P
, whenever we propose that there’s such an element in the empty set, we must be stating something that’s unmistakably FALSE
, because it’s impossible to choose an element from the empty set.
The second proposition—universal quantification over the empty set—is less intuitive. By treating universal quantification as an iterated AND
, you’ll notice that a universal quantification over the empty set delivers no conjunct. Such quantification adds no predicate at all, and can therefore be considered the weakest predicate possible, which is TRUE
(recall the paragraph on predicate strength in the “Truth Tables” section in Chapter 1). As you’ll see when we discuss some additional rewrite rules regarding the negation of quantifiers in the section “Negation of Quantifiers,” it makes complete sense. Table 3-2 lists these two important rewrite rules regarding quantification over the empty set.
Quantification | Evaluates to |
( ∃x∈⊘: P(x) ) | FALSE, regardless of predicate P(x) |
( ∀y∈⊘: Q(y) ) | TRUE, regardless of predicate Q(y) |
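Python’s `any` and `all` follow exactly these two rules when handed an empty iterable, which makes the point easy to see:

```python
# Quantification over the empty set: any/all agree with Table 3-2.
empty = set()

assert any(True for x in empty) is False   # EXISTS over the empty set: FALSE
assert all(False for y in empty) is True   # FORALL over the empty set: TRUE

# The predicate is irrelevant, because no element is ever tested;
# this even "evaluates" a predicate that would crash on any element:
assert all(1 / 0 for y in empty)           # never raises: the body never runs
```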
Quantifiers can be nested. That is, you can use multiple quantifiers in a single predicate. Let’s look at an example first. Suppose the following two sets are given:
S := {1,2,3,4}
T := {1,2}
Now look at the following four (very similar) propositions:
(∀x∈S: (∃y∈S: y = x )) (for all x in S there exists a y in S for which y = x)
(∃y∈S: (∀x∈S: y = x )) (there exists a y in S for which, for all x in S, y = x)
(∀x∈S: (∃y∈T: y = x )) (for all x in S there exists a y in T for which y = x)
(∀x∈T: (∃y∈S: y = x )) (for all x in T there exists a y in S for which y = x)

If you compare the first two propositions, you’ll see that they refer to the set S
only; moreover, the two quantifiers are interchanged. In the last two examples, the two sets S
and T
are interchanged. Now if you check these propositions against the given two sets to investigate their truth value, you can draw the following conclusions:
- The first proposition is TRUE, because you can simply choose the same value for y as the one you chose for x.
- The second proposition is FALSE; there is no such value y. As soon as you’ve bound variable y by selecting a value, there is only one choice for x (namely the same value).
- The third proposition is FALSE too; the counterexamples are x = 3 and x = 4.
- The fourth proposition is TRUE, because T is a subset of S.

From the first two examples, you can draw this important conclusion: sometimes you can safely interchange nested quantifiers in your expressions without changing the meaning of the expression, but sometimes you can’t.
Caution Nested quantifications cannot always be interchanged without risk.
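A quick way to convince yourself of the four conclusions above is to evaluate the propositions over the given finite sets with nested `any`/`all`:

```python
# The four nested quantifications over S = {1,2,3,4} and T = {1,2}.
S = {1, 2, 3, 4}
T = {1, 2}

p1 = all(any(y == x for y in S) for x in S)  # FORALL x in S: EXISTS y in S: y = x
p2 = any(all(y == x for x in S) for y in S)  # EXISTS y in S: FORALL x in S: y = x
p3 = all(any(y == x for y in T) for x in S)  # FORALL x in S: EXISTS y in T: y = x
p4 = all(any(y == x for y in S) for x in T)  # FORALL x in T: EXISTS y in S: y = x

assert (p1, p2, p3, p4) == (True, False, False, True)
```

Note how interchanging the quantifiers turns the TRUE proposition p1 into the FALSE proposition p2.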
You can safely interchange two successive quantifiers in a formula if the following two conditions are met:

- Both quantifiers are of the same type (both universal or both existential).
- The set over which the second quantifier ranges does not depend on the parameter bound by the first quantifier.
For example, the following two expressions are tautologies:
∀x∈A: ∀y∈B: P(x,y) ⇔ ∀y∈B: ∀x∈A: P(x,y)
∃x∈A: ∃y∈B: Q(x,y) ⇔ ∃y∈B: ∃x∈A: Q(x,y)
However, these two are not tautologies:
∀x∈A: ∃y∈B: P(x,y) ⇔ ∃y∈B: ∀x∈A: P(x,y)
∀x∈A: ∀y∈(B∪{x}): Q(x,y) ⇔ ∀y∈(B∪{x}): ∀x∈A: Q(x,y)
The last example is rather tricky; remember from Chapter 2 that the symbol ∪ stands for the union operator. So as soon as you have chosen a value for x
, you add that value to set B
before you try to find a value y
for which Q(x,y)
is TRUE
. Note that the expression even becomes illegal if you interchange the quantifiers, because now you have a reference to the variable x
in ∀y∈(B∪{x}) before x
is introduced in the second quantifier. To be more precise, the expression is not illegal, but rather confusing. If you bind a variable by quantification, the scope of that variable is the quantified expression; the variable is unknown outside the quantified expression. In other words, in the expression ∀y∈(B ∪{x}): ∀x∈A: Q(x,y)
the first x
is not the same as the other ones. So, you might as well say ∀y∈(B∪{x}): ∀z∈A: Q(z,y)
. But then, this expression is not a proposition but a predicate, because it contains a free variable x
.
If you have two consecutive quantifiers of the same type over the same set, it’s common to abbreviate your formulas as follows:
∃x∈N: ∃y∈N: x + y > 42
The preceding existential quantification over the same set becomes
∃x,y∈N: x + y > 42
And likewise for universal quantification:
∀x∈N: ∀y∈N: x + y > 42
The preceding universal quantification over the same set becomes
∀x,y∈N: x + y > 42
Listing 3-13 shows the distributive properties of the universal and existential quantifiers.
( ∃x∈S: P(x) ∨ Q(x) ) ⇔ ( ( ∃x∈S: P(x) ) ∨ ( ∃x∈S: Q(x) ) )
( ∀x∈S: P(x) ∧ Q(x) ) ⇔ ( ( ∀x∈S: P(x) ) ∧ ( ∀x∈S: Q(x) ) )
( ∃x∈S: P(x) ∧ Q) ⇔ ( ( ∃x∈S: P(x) ) ∧ Q)
( ∀x∈S: P(x) ∨ Q) ⇔ ( ( ∀x∈S: P(x) ) ∨ Q)
Note that in the last two examples in Listing 3-13, predicate Q
is not written as Q(x)
; this denotes that predicate Q
does not have a parameter x
. You can use the last two logical equivalences only if x
does not occur as a parameter in predicate Q
; for example:
( ∀y∈S: ( ∃x∈S: x>10 ∧ y<5 ) )
⇔ ( ∀y∈S: ( ∃x∈S: x>10 ) ∧ y<5 )
⇔ ( ∀y∈S: ( ∃x∈S: x>10 ) ) ∧ ( ∀y∈S: y<5 )
By the way, do you see that the left conjunct in the last equation is of the form ( ∀y∈S: Q )?
More specifically, it represents a universal quantification over a proposition; it doesn’t quantify over a predicate whose parameter gets bound by the quantifier. Something similar can occur with an existential quantification. Take a look at Listing 3-14, which introduces two rewrite rules for these situations.
( ∀x∈S: P ) ⇔ ( ( S = ⊘) ∨ P )
( ∃x∈S: Q ) ⇔ ( ( S ≠ ⊘) ∧ Q )
Here P
and Q
represent propositions. Using these rewrite rules, you can further rewrite the preceding example into the following expression:
( ( S = ⊘ ) ∨ ( ∃x∈S: x>10 ) ) ∧ ( ∀y∈S: y<5)
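Over a finite set, the rewrite rules of Listings 3-13 and 3-14 can be spot-checked directly. In the sketch below the set S and the predicates P(x) and Q(x) are my own sample choices, for illustration only:

```python
# Spot-check of the distributive rewrite rules in Listing 3-13.
S = {1, 2, 3, 4, 5}
P = lambda x: x > 3        # sample predicate P(x)
Q = lambda x: x % 2 == 0   # sample predicate Q(x)

assert any(P(x) or Q(x) for x in S) == (any(P(x) for x in S) or any(Q(x) for x in S))
assert all(P(x) and Q(x) for x in S) == (all(P(x) for x in S) and all(Q(x) for x in S))

# Listing 3-14: quantification over a proposition q that has no parameter x.
for q in (True, False):
    assert all(q for x in S) == ((len(S) == 0) or q)   # (S = empty) OR q
    assert any(q for x in S) == ((len(S) != 0) and q)  # (S /= empty) AND q
```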
If you use the negation connective in combination with a universal quantifier, the following happens:

- ¬(∀x∈S: P(x)) means that P is not TRUE for all elements x of S.
- Then there must be at least one element x of S for which P is not TRUE; in other words: (∃x∈S: ¬P(x))
Likewise, adding the negation to an existential quantification results in the following:

- ¬(∃x∈S: P(x)) means that there is no element x of S for which P is TRUE.
- Then P must be FALSE for all values x in S; in other words: (∀x∈S: ¬P(x))
So we have two important quantifier rewrite rules, as shown in Listing 3-15.
¬( ∀x∈S: P(x) ) ⇔ ( ∃x∈S: ¬P(x) )
¬( ∃x∈S: P(x) ) ⇔ ( ∀x∈S: ¬P(x) )
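On a finite set, a predicate is just an assignment of a truth value to each element, so both rules of Listing 3-15 can be verified for every possible predicate on a small sample set:

```python
from itertools import product

# The rewrite rules of Listing 3-15, checked over a sample set S and all
# 2^|S| possible predicates on it.
S = [1, 2, 3]
for truth in product((True, False), repeat=len(S)):
    P = dict(zip(S, truth))  # one concrete predicate: element -> truth value
    # NOT (FORALL x: P(x))  <=>  EXISTS x: NOT P(x)
    assert (not all(P[x] for x in S)) == any(not P[x] for x in S)
    # NOT (EXISTS x: P(x))  <=>  FORALL x: NOT P(x)
    assert (not any(P[x] for x in S)) == all(not P[x] for x in S)
print("Listing 3-15 verified for every predicate on S")
```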
Do you remember the discussion about quantification over the empty set in the previous section “Quantification Over the Empty Set” (see Table 3-2)? We don’t want the empty set to cause any exceptions to the rules. Therefore:

- We start with (∃x∈⊘: P(x)) ⇔ FALSE.
- Using the first rewrite rule in Listing 3-15, we can transform this into ¬(∀x∈⊘: ¬P(x)) ⇔ FALSE.
- Negating the predicates at both sides of the equivalence gives ¬¬(∀x∈⊘: ¬P(x)) ⇔ TRUE.
- Striking out the double negation in front of the universal quantifier gives (∀x∈⊘: ¬P(x)) ⇔ TRUE, regardless of the predicate.

So you see it’s logically correct that universal quantification over the empty set always results in TRUE, regardless of the actual predicate after the colon.
If you’re still not fully convinced, here’s an alternative reasoning. The universal quantifier corresponds with an iterated AND
construct, as explained in the paragraph on quantifiers in section “Quantifiers and Finite Sets.” Therefore, you can set up the following series of logical equivalences:
∀x∈S: P(x)
⇔ P(x1) ∧ P(x2) ∧ P(x3) ∧ ... ∧ P(xn-1) ∧ P(xn)
⇔ TRUE ∧ P(x1) ∧ P(x2) ∧ P(x3) ∧ ... ∧ P(xn-1) ∧ P(xn)
The prefix TRUE ∧
is allowed according to one of the “special case” rewrite rules listed in Table 1-12. As you can see, you start with a set S
containing n
values. Now if you start removing the elements of set S
one by one, you can remove ∧ P(xn)
from the end, followed by ∧ P(xn-1)
, and so on, until you will eventually end up with the empty set—because you removed the last element of set S
. However, then the equivalence is reduced to ∀x∈⊘: P(x)
which is equivalent to TRUE
.
The previous sections already introduced a few rewrite rules with regard to quantification. Listing 3-16 lists the most commonly used rewrite rules for quantifiers.
¬( ∃x∈S: P(x) ) ⇔ ( ∀x∈S: ¬P(x) )
¬( ∀x∈S: P(x) ) ⇔ ( ∃x∈S: ¬P(x) )
( ∃x∈S: P(x) ) ⇔ ¬( ∀x∈S: ¬P(x) )
( ∀x∈S: P(x) ) ⇔ ¬( ∃x∈S: ¬P(x) )
( ∀x∈A: ( ∀y∈B: P(x,y) ) ) ⇔ ( ∀y∈B: ( ∀x∈A: P(x,y) ) )
( ∃x∈A: ( ∃y∈B: Q(x,y) ) ) ⇔ ( ∃y∈B: ( ∃x∈A: Q(x,y) ) )
( ∃x∈S: P(x) ∨ Q(x) ) ⇔ ( ( ∃x∈S: P(x) ) ∨ ( ∃x∈S:Q(x) ) )
( ∀x∈S: P(x) ∧ Q(x) ) ⇔ ( ( ∀x∈S: P(x) ) ∧ ( ∀x∈S: Q(x) ) )
( ∃x∈S: P(x) ∧ Q) ⇔ ( ( ∃x∈S: P(x) ) ∧ Q )
( ∀x∈S: P(x) ∨ Q) ⇔ ( ( ∀x∈S: P(x) ) ∨ Q )
¬( ∀x∈S: ∀y∈T: P(x,y) ) ⇔ ( ∃x∈S: ∃y∈T: ¬P(x,y) )
¬( ∃x∈S: ∃y∈T: P(x,y) ) ⇔ ( ∀x∈S: ∀y∈T: ¬P(x,y) )
¬( ∀x∈S: ∃y∈T: P(x,y) ) ⇔ ( ∃x∈S: ∀y∈T: ¬P(x,y) )
¬( ∃x∈S: ∀y∈T: P(x,y) ) ⇔ ( ∀x∈S: ∃y∈T: ¬P(x,y) )
The third and fourth rewrite rules in Listing 3-16 state that you can rewrite a universal quantification into an existential quantification (with the use of a negation connective) and vice versa. This means that you don’t need one of the quantifiers to express any arbitrary quantified predicate. However, it’s convenient to have the two quantifiers available, because we use both in natural language.
You’ll prove one of these rewrite rules in the exercises at the end of this chapter.
Normal forms (also referred to as canonical forms) are often used in mathematics. The English word “normal” here is being used in the sense of conforming to a norm (that is a standard) as opposed to the meaning “usual.”
Normal forms are typically used to rewrite expressions in such a way that you can solve (or further analyze) them easily. A characteristic example is the well-known normal form for quadratic equations: ax² + bx + c = 0
. Once you have rewritten your quadratic equations to this format, you can use the quadratic formula to find the values for x
. In logic, there are similar normal forms; we’ll discuss two of them in the sections that follow. In general, normal forms are useful in automated theorem proving. The disjunctive and conjunctive normal forms discussed in the following sections are particularly useful to verify whether a predicate is a tautology or a contradiction.
Note Normal forms are also useful for comparing two predicates. By rewriting two seemingly similar predicates into the same normal form, you can examine more closely what exactly the difference is between the two. Or maybe after having transformed both into a normal form, you discover that they are in fact the same predicate. This is one of the applications in Part 2 of this book when data integrity rules are specified as formal predicates.
A predicate is in conjunctive normal form (CNF) if it consists of one or more members separated by AND
; that is, if the predicate, say P-CNF
, has the following format:
P-CNF := C1 ∧ C2 ∧ C3 ∧ ... ∧ Cn
In this formula, C1
, C2
, ... Cn
are known as the conjunction members—or just conjuncts for short. Each conjunction member is an iterated OR
of simple predicates only (note that zero iteration is allowed too, which results in a conjunction member containing no invocation of OR
); a simple predicate is a predicate that cannot be further decomposed into a disjunction or conjunction (that is, it cannot be a compound predicate). These simple predicates may optionally be preceded by a negation connective. Note that the set {∨, ∧, ¬}
is functionally complete, as explained in Chapter 1; therefore, you don’t need other connectives to express any arbitrary predicate. Also, as we’ll demonstrate shortly, CNF is always achievable for any arbitrary predicate.
CNF in full detail looks as shown in Listing 3-17.
P-CNF := (SP11 ∨ SP12 ∨ SP13 ∨ ... ∨ SP1a) ∧
(SP21 ∨ SP22 ∨ SP23 ∨ ... ∨ SP2b) ∧
(SP31 ∨ SP32 ∨ SP33 ∨ ... ∨ SP3c) ∧
... ∧
(SPn1 ∨ SPn2 ∨ SPn3 ∨ ... ∨ SPnd)
In Listing 3-17, each simple predicate SPij
is allowed to be prefixed with a negation connective. It’s easy to check whether a proposition in CNF is a tautology, because as soon as one of the conjunction members is FALSE
, the proposition becomes FALSE
. In other words, you can check the conjunction members one by one. Moreover, because all conjunction members of a predicate in CNF must be an iterated OR
with simple predicates only, the tautology check can also be formulated as follows:
A CNF predicate is a tautology (always TRUE) if and only if each conjunction member contains a simple predicate P and its negation.
Listing 3-18 shows five examples of predicates in CNF (P
, Q
, R
, and S
are simple predicates).
P
P ∨ Q
P ∧ Q
(P ∨ Q ∨ ¬R) ∧ (P ∨ ¬R)
(P ∨ Q ∨ ¬P) ∧ (Q ∨ ¬Q) ∧ (¬R ∨ R ∨ S)
The first predicate has one conjunction member that consists of one simple predicate. The second predicate has one conjunction member that happens to be a disjunction of two simple predicates; the third and the fourth examples consist of two conjunction members. The fifth example is a tautology; note that the three conjunction members contain the pairs of simple predicates (P, ¬P), (Q, ¬Q)
, and (R, ¬R)
, respectively.
A predicate is in disjunctive normal form (DNF) if it consists of one or more members separated by OR
; that is, if the predicate, say P-DNF
, has the following format:
P-DNF := D1 ∨ D2 ∨ D3 ∨ ... ∨ Dn
In this formula, D1
, D2
, ... , Dn
are known as the disjunction members—or just disjuncts for short. Each disjunction member is an iterated AND
with simple predicates only (zero iteration allowed). So DNF in full detail looks as shown in Listing 3-19.
P-DNF := (SP11 ∧ SP12 ∧ SP13 ∧ ... ∧ SP1a) ∨
(SP21 ∧ SP22 ∧ SP23 ∧ ... ∧ SP2b) ∨
(SP31 ∧ SP32 ∧ SP33 ∧ ... ∧ SP3c) ∨
... ∨
(SPn1 ∧ SPn2 ∧ SPn3 ∧ ... ∧ SPnd)
In Listing 3-19, each SPij
is allowed to be prefixed with a negation connective. Compound predicates in DNF are easy to check for contradictions. For a predicate in DNF to be a contradiction, all disjunction members must be FALSE
—because the predicate is written as an iterated OR
construct. Following the same reasoning as we did for predicates in CNF, we can say the following about predicates in DNF:
A DNF predicate is a contradiction if and only if each disjunction member contains a simple predicate P and its negation.
See Listing 3-20 for examples of predicates in DNF.
P ∨ Q
P ∧ Q
(P ∧ Q ∧ ¬R) ∨ (S ∧ ¬R)
(P ∧ Q ∧ ¬P) ∨ (R ∧ ¬R ∧ S)
The first two examples are in DNF as well as in CNF (compare Listing 3-18); the second example consists of one disjunction member that happens to be a conjunction of two simple predicates; the third and last example consists of two disjunction members. The last example is also a contradiction.
In this section we’ll investigate how you can transform a given predicate into DNF or CNF.
There are several methods to rewrite existing predicates into DNF. Normally, you try to use existing rewrite rules to move your expression into the “right” direction until it has the desired format: an iterated OR
where the disjuncts are iterated AND
s with simple predicates only.
An interesting alternative technique is based on using a truth table. For example, suppose this is the original predicate:
P ⇒ ( Q ⇔ ¬P )
Table 3-3 shows the corresponding truth table.

P | Q | ¬P | Q ⇔ ¬P | P ⇒ ( Q ⇔ ¬P ) |
t | t | f | f | f |
t | f | f | t | t |
f | t | t | t | t |
f | f | t | f | t |
As you can see from the last column of Table 3-3, the given predicate is TRUE
for the last three truth combinations of (P,Q)
: (t,f)
, (f,t)
, and (f,f)
. You can represent those three truth combinations with the following corresponding conjunctions:
(t,f): C1 := P ∧ ¬Q
(f,t): C2 := ¬P ∧ Q
(f,f): C3 := ¬P ∧ ¬Q
Now, if you combine these three conjunctions into the iterated disjunction C1 ∨ C2 ∨ C3, you end up with a rewritten predicate in DNF that precisely represents the truth table of the original predicate:
( P ∧ ¬Q ) ∨ ( ¬P ∧ Q ) ∨ ( ¬P ∧ ¬Q )
If you aren’t convinced, set up a truth table for the rewritten predicate and compare the result with the original predicate in Table 3-3. By the way, did you notice that the last column of Table 3-3 is identical to the definition of the NAND
operator? The rewritten expression in DNF precisely represents the NAND
operator; the expression is FALSE
if and only if both operands are TRUE
.
Note This DNF rewrite algorithm using a truth table is relatively easy to automate. You could write a program that does the conversion for you, without the need for inspiration and creativity; DNF is always achievable.
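As the note suggests, the truth-table method is easy to automate. Here is a minimal Python sketch of that idea; the function name `truth_table_dnf` and the string representation of literals are my own:

```python
from itertools import product

def truth_table_dnf(names, pred):
    """Build a DNF string for pred (a function of len(names) booleans)
    from its truth table: one conjunction per TRUE row."""
    disjuncts = []
    for values in product((True, False), repeat=len(names)):
        if pred(*values):
            conj = " AND ".join(n if v else "NOT " + n
                                for n, v in zip(names, values))
            disjuncts.append("(" + conj + ")")
    return " OR ".join(disjuncts) if disjuncts else "FALSE"

# P => (Q <=> NOT P), the example predicate from the text
implies = lambda a, b: (not a) or b
dnf = truth_table_dnf(["P", "Q"], lambda p, q: implies(p, q == (not p)))
print(dnf)  # (P AND NOT Q) OR (NOT P AND Q) OR (NOT P AND NOT Q)
```

The output matches the DNF derived by hand above; a predicate that is never TRUE yields the empty disjunction, rendered here as FALSE.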
FUNCTIONAL COMPLETENESS REVISITED
To find the CNF for a given predicate—which is also always achievable—you first apply the preceding method using the truth table to find the DNF of the predicate. You then apply the distributive laws to convert the iterated disjunction into an iterated conjunction. Here’s an example derivation that converts the predicate ( P ∧ Q ) ∨ ( R ∧ S )
, which is in DNF, into CNF:
( P ∧ Q ) ∨ ( R ∧ S )
⇔ ( P ∨ ( R ∧ S ) ) ∧ ( Q ∨ ( R ∧ S ) )
⇔ ( P ∨ R ) ∧ ( P ∨ S ) ∧ ( Q ∨ R ) ∧ ( Q ∨ S )
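That the derived CNF really is equivalent to the original DNF can be confirmed by checking all sixteen truth combinations; `forms_agree` below is a helper name of my own:

```python
from itertools import product

# Brute-force check that (P AND Q) OR (R AND S) equals its CNF rewrite
# (P OR R) AND (P OR S) AND (Q OR R) AND (Q OR S) on all 16 combinations.
def forms_agree():
    return all(((p and q) or (r and s)) ==
               ((p or r) and (p or s) and (q or r) and (q or s))
               for p, q, r, s in product((True, False), repeat=4))

assert forms_agree()
print("DNF and CNF forms are equivalent")
```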
We’ll revisit CNF in Part 2 of this book when we discuss specification and implementation guidelines for data integrity constraints.
This section contains a summary of the contents of this chapter, formatted as a bulleted list—just like the previous chapter. Again, if you don’t feel comfortable about one of the following concepts, you might want to revisit certain chapter sections before continuing with the exercises in the next section.
- The existential quantifier is represented by the symbol ∃.
- The universal quantifier is represented by the symbol ∀.
- Over a finite set, you can interpret a predicate with an existential quantifier as an iterated OR construct; similarly, you can interpret a predicate with a universal quantifier as an iterated AND construct.
- The existential quantifier applied to the empty set will always return FALSE, regardless of the predicate, whereas the universal quantifier will always return TRUE when applied to the empty set.
- A predicate is in conjunctive normal form (CNF) if it consists of conjunction members separated by AND. CNF is useful when you are investigating tautologies.
- A predicate is in disjunctive normal form (DNF) if it consists of disjunction members separated by OR. DNF is useful when checking for contradictions.

Prove the following two absorption equivalences:

P ∨ ( P ∧ Q ) ⇔ P
P ∧ ( P ∨ Q ) ⇔ P
Let A := {2,4,6,8} and B := {1,3,5,7,9} be given sets. Which of the following propositions are TRUE? For the propositions that are FALSE, could you come up with different definitions for sets A or B in such a way that they become TRUE?

- Every number in A is divisible by 2.
- Every number in B is less than 9.
- Set A contains three different numbers of which the sum is 18.

Transform the following predicate into CNF:

( P ⇒ ( Q ∧ ¬R ) )

Hint: First, try to find the DNF of the negation of the preceding predicate; when found, apply one of De Morgan’s laws.
Prove the following rewrite rule:

¬( ∃x∈S: ∀y∈T: P(x,y) ) ⇔ ( ∀x∈S: ∃y∈T: ¬P(x,y) )
Let S be a given finite set and P(x) a given predicate. Complete the following expressions in such a way that they are transformed into tautologies: