Intro to Logic and Computation

Navaz Alani
Spring 2020


Introduction

What is logic? For Aristotle, logic was the instrument by means of which we come to know anything. Aristotelian logic formalizes basic principles of good reasoning and provides a way to evaluate specific cases of reasoning.

A syllogism is a kind of logical argument in which one proposition (the conclusion) is inferred from two or more others (the premises) of a specific form. Here is an example of an Aristotelian syllogism:

These are the premises:

  1. All humans are mortal
  2. Socrates is a human

Here is the inference/conclusion:

\rightarrow Socrates is mortal

This is an example of good reasoning/a good argument because it is truth preserving, i.e. if premises 1 and 2 are true, then the conclusion must also be true. The correctness of the argument depends purely upon its form, not its content. The form can be viewed as the high-level structure of the argument.
Another one of the Aristotelian syllogisms:

  1. All Accords are Hondas
  2. All Hondas are Japanese

\rightarrow All Accords are Japanese

As you can see, the high-level structure of this argument is the same, which is why the conclusion holds as long as the premises are true. The general form of these hypothetical syllogisms is:

  1. All xx are yy
  2. All yy are zz

\rightarrow All xx are zz

This is an example of a good argument. What about this one?

  1. All xx are yy
  2. Some yy are zz

\rightarrow Some xx are zz

This is actually a bad argument. It is not truth preserving: the two initial premises could be true while the conclusion is false. Try to find a case under which this argument does not hold, i.e. find two premises which are both true but whose conclusion is false.
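One way to hunt for such a counterexample is to model the classes as finite sets, where "all xx are yy" means subset and "some yy are zz" means non-empty intersection. A minimal sketch (the particular sets are our own hypothetical choice):

```python
# Hypothetical counterexample sets: substitute any classes you like.
xs = {"sparrow"}             # the xx: sparrows
ys = {"sparrow", "ostrich"}  # the yy: birds
zs = {"ostrich"}             # the zz: flightless animals

premise1 = xs <= ys          # All xx are yy  -> True
premise2 = bool(ys & zs)     # Some yy are zz -> True
conclusion = bool(xs & zs)   # Some xx are zz -> False: not truth preserving

print(premise1, premise2, conclusion)
```

Both premises hold, yet no sparrow is flightless, so the conclusion fails; this is exactly the kind of counterexample the exercise asks for.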

Significance of Logic (in CS)

  • In the heart of all electronic computers, there are logic gates. Digital circuits are formed of logic gates. The study of logic can help minimize the number of components in electronic circuits.
  • AI- in particular, expert systems, which are essentially formed of an inference engine running on a knowledge base. An example is DENDRAL (built at Stanford University in the 1960s), an expert system to aid the identification of unknown organic molecules.
  • Automated theorem proving/automated proof verifiers
  • Databases- the core of modern database systems (e.g. SQL) uses first order logic.
  • Programming- program specification, formal verification
  • DNA computing
  • Synthetic biology

Propositional Logic

  • In this context, we define logic to be the analysis and appraisal of arguments.
  • An argument is a set of statements, namely one or several premises and a conclusion, usually connected by a "therefore". You can see that this definition is made with respect to the high-level structure of an argument. Consider the following argument:
  1. No pure water is burnable
  2. Some Cuyahoga river water is burnable

\rightarrow Some Cuyahoga river water is not pure

This is a valid argument, which is one in which whenever the premises are true, the conclusion is also true. Convince yourself that this argument holds.

  • Logic essentially studies the "forms" of reasoning (the high-level structure). As mentioned before, the content of the arguments may deal with anything.
  • Learning logic is learning the tools of reasoning applicable to any discipline.

Have a look at this argument:

  1. No pure water is burnable
  2. Some Cuyahoga river water is not burnable

\rightarrow Some Cuyahoga river water is pure

This argument is invalid (incorrect/unsound). There could be a case where the whole river is polluted by a non-burnable pollutant- the argument works only under the unstated assumption that all non-burnable water is pure. Here is another one:

  1. If demand rises, companies expand
  2. If companies expand, companies hire more workers

\rightarrow If demand rises, companies hire more workers.

NOTE: One may argue against the conclusion but as soon as the premises are accepted to be true, the conclusion must also be true. In such cases, the conclusion is said to logically follow from the premises, or that the argument is valid (correct/sound).

Another argument:

  1. This computer program has a bug or the input is erroneous
  2. The input is not erroneous

\rightarrow This computer program has a bug

Compound statements consist of several parts, each of which is a statement
in its own right.

  • "Demand rises" and "companies expand" are connected by if ..., then ...
  • "This computer program has a bug" and "input is erroneous" are connected by
    ... or ...

Some Important Logical Arguments

To analyze arguments for correctness we abbreviate the essential statements by
substituting letters p,q,rp,q,r and so on.

  • pp may represent "demand rises"
  • qq may represent "companies expand"
  • rr may represent "companies hire"

The argument then becomes:

  1. If pp, then qq
  2. If qq, then rr

\rightarrow If pp, then rr

This kind of argument is called a hypothetical syllogism and it is a valid
argument (it is truth preserving/the reasoning is sound).

The "computer bug" example has this high-level structure:

  1. pp or qq
  2. Not qq

\rightarrow pp

This kind of argument is called a disjunctive syllogism. The word disjunction
and its derivatives refer to something to do with the connector or.

Here's another argument which is called modus ponens.

  1. If pp, then qq
  2. pp

\rightarrow qq
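The validity of such argument forms can be checked mechanically by enumerating all truth-value assignments. A small sketch in Python (the helper `valid` is our own, not a library function):

```python
from itertools import product

def valid(premises, conclusion, nvars):
    """An argument form is valid iff the conclusion is true in every
    row of the truth table in which all premises are true."""
    return all(conclusion(*row)
               for row in product([False, True], repeat=nvars)
               if all(prem(*row) for prem in premises))

implies = lambda a, b: (not a) or b

# Hypothetical syllogism: if p then q; if q then r; therefore if p then r
hyp = valid([lambda p, q, r: implies(p, q),
             lambda p, q, r: implies(q, r)],
            lambda p, q, r: implies(p, r), 3)

# Disjunctive syllogism: p or q; not q; therefore p
disj = valid([lambda p, q: p or q, lambda p, q: not q],
             lambda p, q: p, 2)

# A bad form for contrast: p or q; q; therefore p
bad = valid([lambda p, q: p or q, lambda p, q: q],
            lambda p, q: p, 2)

print(hyp, disj, bad)   # True True False
```

The bad form fails on the row where pp is false and qq is true: both premises hold there, but the conclusion does not.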

Concretizing Propositional Logic

Definition A statement that is either true or false is called a
proposition. From the preceding analysis:

  • p,q,rp,q,r are called propositional variables
  • True (T) and False (F) (or 1 and 0 respectively) are called
    propositional constants
  • Any propositional variable can be assigned the value 0 or 1
  • Propositional variables are atomic propositions i.e. they cannot be further
    subdivided
  • Compound statements are obtained by combining several atomic propositions.
  • The function of the structures ... or ..., ... and ..., not ...,
    if ..., then ... is to combine propositions; they are therefore called
    logical connectives

Statements formulated in natural language are frequently ambiguous because
the words can have more than one meaning. We want to avoid this so we introduce
new mathematical symbols to take the role of connectives.

Conventions: Stating a proposition in English means asserting that it is True.

"It is true that cats eat fish" = "cats eat fish"

Similarly, if pp is a proposition, then "pp" means "pp is True" or "pp holds".
This should justify the syntax used in the arguments above.

NEGATION (Definition) Let pp be a proposition. The compound proposition
¬p\lnot p, pronounced as "not pp" is the proposition that is True when pp is
False and that is False when pp is True.

  • ¬p\lnot p is called the logical negation of pp
  • The connective ¬\lnot can be translated into English as it is not the case that ... or simply not...not ...

Here is the truth table for negation:

pp (¬p\lnot p)
0 1
1 0

CONJUNCTION: (Definition) Let pp and qq be two propositions, then pqp\wedge q
is True if and only if both pp and qq are True.

  • pqp\wedge q is called the logical conjunction of pp and qq
  • The connective \wedge is pronounced "and" and may be translated to the English
    word "and"

Truth table for conjunction:

pp qq (pqp\wedge q)
1 1 1
1 0 0
0 1 0
0 0 0

Observations:

  • In English, we usually take shortcuts that are not allowed in logic statements.
    "He eats and drinks" really means "he eats and he drinks".
  • In logic, every statement must have its own subject and its own predicate.
    Taking p=p = "He eats" and q=q = "he drinks", our sentence becomes pqp\wedge q
  • Sometimes, we use words other than "and" to denote a conjunction: "but", "in
    addition to" and "moreover"
  • Not all instances of "and" are for conjunction: "Jack and Jill are cousins".
    In this case "and" is not a conjunction at all.

DISJUNCTION: (Definition) Let pp and qq be propositions. Then, pqp \lor q
is False if and only if both pp and qq are False. If pp, qq or both are True,
then pqp\lor q is True.

  • pqp\lor q is the logical (inclusive) disjunction of pp and qq.
  • The connective \lor is pronounced as "or" and can be translated to English
    by "or".

Truth table for (inclusive) disjunction:

pp qq (pqp\lor q)
1 1 1
1 0 1
0 1 1
0 0 0

Observations:

The English "or" can have 2 meanings:

  • "Exclusive or". "You can have either soup or salad" means you can have
    "soup" or "salad", but not both.
  • "Inclusive or". "The computer program has a bug or the input is erroneous".
    The computer program could have a bug, the input could be erroneous, or both.

Note: When performing a disjunction of two sentences, make sure that the
sentences are complete: each should have its own subject and predicate.
The predicate is the part of a sentence which contains the verb and states
something about the subject. This helps avoid any ambiguity.

"There was an error on line 15 or 16" must first be expanded to "There was an
error on line 15 or there was an error on line 16". This helps make sure that
the propositions are valid.

IMPLICATION: (Definition) Let pp and qq be two propositions. Then
pqp\rightarrow q is False if pp is True and qq is False, and pqp\rightarrow q is
True otherwise.

  • pqp\rightarrow q is called the logical implication of pp and qq
  • pqp\rightarrow q may be translated to English using if ..., then ..., or as
    "it is not the case that pp is True and qq is False".
  • pp is called the antecedent
  • qq is called the consequent

Truth table for implication:

pp qq (pqp\rightarrow q)
1 1 1
1 0 0
0 1 1
0 0 1

Observations:

  • If pp is False, then pqp\rightarrow q is vacuously True since in such a case,
    verifying pqp\rightarrow q does not require deducing qq from pp.
    More formally, vacuous truth is when an implication is true only because
    its antecedent cannot be satisfied.

The following are logically equivalent:

  • pqp\rightarrow q
  • If pp, then qq
  • Whenever pp, then qq
  • pp is sufficient for qq
  • pp only if qq
  • pp implies qq
  • qq if pp
  • qq whenever pp
  • qq is necessary for pp
  • qq is implied by pp

EQUIVALENCE: (Definition) Let pp and qq be propositions. Then p    qp \iff q
is True if and only if pp and qq have the same truth values.

  • p    qp\iff q is called the logical equivalence/biconditional of pp and qq
  • It is pronounced "pp if and only if qq". Usually, "if and only if" is
    abbreviated as "iff".

Truth table for equivalence:

pp qq (p    qp\iff q)
1 1 1
1 0 0
0 1 0
0 0 1
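All of these truth tables can be generated mechanically. A sketch in Python (the row order matches the tables above):

```python
from itertools import product

def table(f):
    """Truth-table column of a binary connective,
    with rows in the order (1,1), (1,0), (0,1), (0,0)."""
    return [int(f(bool(p), bool(q)))
            for p, q in product([1, 0], repeat=2)]

conj  = table(lambda p, q: p and q)        # conjunction
disj  = table(lambda p, q: p or q)         # inclusive disjunction
impl  = table(lambda p, q: (not p) or q)   # implication
equiv = table(lambda p, q: p == q)         # equivalence

print(conj, disj, impl, equiv)
# [1, 0, 0, 0] [1, 1, 1, 0] [1, 0, 1, 1] [1, 0, 0, 1]
```

Note how implication is encoded as "not pp, or qq", matching its definition as "it is not the case that pp is True and qq is False".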

Sorting out Equivalence and Implication: There is a difference and it is not
always clear which connective is intended in English. Have a look:

"Eating hamburgers at a fast food restaurant is equivalent to aiding the
destruction of the world's rainforests"

This looks like an equivalence, but if you swapped the two propositions,
clearly something is wrong. The intended meaning could have been:

"If one eats hamburgers at a fast food restaurant, then one is aiding the
destruction of the world's rainforests"

Remarks on Logical Operators

  1. ¬\lnot is the only unary connective i.e. it acts on one proposition
  2. All other connectives are binary i.e. acting on two propositions
  3. ,,    \land, \lor, \iff are symmetric: the order of the propositions does not
    matter. pqp\land q is an equivalent statement to qpq\land p.
  4. \rightarrow is not symmetric. pqp\rightarrow q and qpq\rightarrow p may have
    different truth values.

Ambiguity and Imprecision

Logic helps clarify the meaning of descriptions e.g. in English. After all, we
even use it to state, precisely, the requirements of computer systems. Natural
language definitions can be ambiguous or imprecise. What's the difference?

  • Ambiguity is when a sentence has more than one meaning.

Take a look at this sentence: "David and John from Toronto are coming for a
visit". Well, who is from Toronto? David, John or both? More information is
needed. This statement is ambiguous. Another one: "I know a much funnier man
than Bill". Could it mean "I know a much funnier man than Bill does" or "I know
a much funnier man than Bill is"- it is ambiguous.

Ambiguity is dealt with by querying the author of the sentence or by examining
context.

  • Imprecision (Vagueness) is when a sentence has only one meaning, but as a
    proposition, the circumstances under which it is True and those under which
    it is False are not clearly delineated.

Take a look at this sentence: "John is tall". But what exactly does this mean?
A more precise sentence could be "John is 2 meters tall". Another one: "This
computer is fast" can be made more precise: "This computer can execute 2
million instructions per second".

Imprecise sentences, caused by qualitative descriptions, can be made precise
by introducing quantitative measures.


Propositional Language

Here's the general idea behind the formal propositional language.

  • Using connectives, one can combine propositions (atomic or compound)
  • Ambiguity is prevented through fully parenthesized formulae, which are
    parsed uniquely
  • When possible, parentheses are dropped in favor of precedence rules

We shall construct LpL^p, the formal language of propositional
logic. We start off by defining the syntax of LpL^p, introducing terminology
along the way. Then, formulas in LpL^p are discussed. Equipped with these two,
we work through some proofs in order to demonstrate the unique readability of
the LpL^p language.

LpL^p Language Specifications

LpL^p is the formal language of propositional logic and it is composed of the
following symbols:

  • Propositional variables: p,q,r,p,q,r,\dots (with or without subscripts) represent
    propositions, as previously discussed
  • Logical Connectives: ¬,,,,    \lnot, \land, \lor, \rightarrow, \iff as previously
    discussed

The following are language specifications, with introduction of terminology:

  • Expressions: finite length strings of symbols in LpL^p. All of
    p,pq,(r),(pq)p, pq, (r), (p\land \rightarrow q) are expressions in the LpL^p language.
  • The Length of an expression is defined to be the number of symbols in it
    • An empty expression is one which has length 0. It is denoted by
      ϵ\epsilon, and is not to be mistaken with the empty set, \emptyset.
  • Two expressions UU and VV are equal iff they are of the same length and
    have the same symbols in the same order.
  • Expressions are scanned from left to right, like reading English.
  • Concatenation of two expressions UU and VV (in that order) is denoted
    by UVUV. Note that ϵU=Uϵ=U\epsilon U = U \epsilon = U.
  • If U=W1VW2U = W_1VW_2 where U,V,W1,W2U, V, W_1, W_2 are expressions, then we define
    • VV is a segment of UU
    • A proper segment of UU is defined to be the case when VUV\neq U and
      VV is non-empty
  • If U=VWU=VW, where U,V,WU,V,W are expressions, then we define
    • VV to be an initial segment (prefix) of UU
    • WW to be a terminal segment (suffix) of UU
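These string-level definitions translate directly into operations on Python strings; here is a sketch (the '&' symbol is our ASCII stand-in for the wedge):

```python
U = "(p&q)"   # an expression in the language, '&' standing in for the wedge

print(len(U))                    # length of the expression: 5

eps = ""                         # the empty expression, epsilon
assert eps + U == U + eps == U   # epsilon is the identity for concatenation

# Initial segments (prefixes) and terminal segments (suffixes)
prefixes = [U[:i] for i in range(1, len(U) + 1)]
suffixes = [U[i:] for i in range(len(U))]
print(prefixes[0], prefixes[-1])  # '(' and the whole of U

# A segment V of U is any contiguous substring: U = W1 + V + W2
assert "p&q" in U                # "p&q" is a proper segment of U
```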

Syntax: Well-formed Formulae

Definition: An atom (atomic formula) is an expression consisting of a
propositional variable only. The set of atomic formulae in LpL^p is denoted by
Atom(Lp)Atom(L^p).

Definition: An expression in LpL^p is called a formula (a well-formed
propositional formula) iff it can be constructed according to the following
rules of formation:

  1. Every atomic formula is a formula
  2. If AA is a formula, then so is (¬A)(\lnot A)
  3. If A,BA,B are formulae, then so are
    (AB),(AB),(AB),(A    B)(A \land B), (A \lor B), (A \rightarrow B), (A \iff B)

The set of all formulae in LpL^p is denoted by Form(Lp)Form(L^p). It is the smallest
class of expressions of LpL^p that contains Atom(Lp)Atom(L^p) and is closed under the
formation rules of formulae.

Example: Generating formulae.

Let F=((pq)((¬p)    (pq)))F = ((p\lor q) \rightarrow ((\lnot p) \iff (p \land q))) be an element in
Form(Lp)Form(L^p) and p,q,rp,q,r are elements in Atom(Lp)Atom(L^p). How is FF generated from
the rules of formation?

By rule 1, the atomic propositional variables are themselves formulae.
With these propositional variables, one can form the formulae A=(pq)A = (p\lor q),
B=(pq)B = (p\land q), using rule 3. Another formula, C=(¬p)C = (\lnot p) can be formed
by rule 2 applied to pp. Then, D=(C    B)D = (C \iff B) can be formed by rule 3.
Finally, F=(AD)F = (A \rightarrow D) can be formed using rule 3 yet again.

As you can see, a formula in LpL^p can be built from the bottom up, using the
rules of formation.
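The bottom-up generation of FF can be mirrored with plain string-building, one function per formation rule. A sketch (the ASCII symbols ~, &, |, ->, <-> are our stand-ins for the connectives):

```python
# The formation rules as string builders (ASCII stand-ins for connectives)
def neg(a):     return "(~" + a + ")"
def conj(a, b): return "(" + a + " & " + b + ")"
def disj(a, b): return "(" + a + " | " + b + ")"
def impl(a, b): return "(" + a + " -> " + b + ")"
def iff(a, b):  return "(" + a + " <-> " + b + ")"

p, q = "p", "q"        # rule 1: atoms are formulae
A = disj(p, q)         # rule 3: (p | q)
B = conj(p, q)         # rule 3: (p & q)
C = neg(p)             # rule 2: (~p)
D = iff(C, B)          # rule 3: ((~p) <-> (p & q))
F = impl(A, D)         # rule 3: the target formula

print(F)   # ((p | q) -> ((~p) <-> (p & q)))
```

Every intermediate string is itself a well-formed formula, which is exactly what "closed under the formation rules" means.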

Example: Parsing formulae

Take the following complex sentence in the English language

"If Michelle wins at the olympics, everyone will admire her and she will get
rich, but if she does not win, all her effort was in vain."

To translate this sentence to the language of propositional logic, we first
isolate the atomic propositions in the sentence (notice how each proposition
is sure to have a subject and an action):

  • p=p = Michelle wins at the Olympics
  • q=q = Everyone admires Michelle
  • r=r = Michelle will get rich
  • s=s = Michelle's effort was in vain

Then, the compound sentence (compound proposition) above can be expressed as
follows:

((p(qr))((¬p)s))((p\rightarrow (q\land r)) \land ((\lnot p) \rightarrow s))

Parse Trees: A parse tree can be used to analyse formulae too. Have a look
at the parse tree of ((p(¬q))r)((p \land (\lnot q)) \rightarrow r) (Forgive my directory
structure rendering of a parse tree).

((p(¬q))r)((p \land (\lnot q)) \rightarrow r)

├─ (p(¬q))(p\land (\lnot q))
│   ├─ pp
│   └─ (¬q)(\lnot q)
│        └─ qq
└─ rr

Question: Can a formula be of two (or more) kinds?
For example, can a formula be both a conjunction and an implication?
This is important when it comes to understanding the place for ambiguity in
LpL^p.

Here are some claims about LpL^p that we intend to prove; we shall refer to
them as ()(\clubs).

  • Every formula in LpL^p has the same number of occurrences of left and right
    parentheses.
  • Any proper initial segment of a formula in LpL^p has more occurrences of left
    than right parentheses. Any proper terminal segment of a formula in LpL^p
    has fewer occurrences of left compared to right parentheses.
  • Neither a proper initial segment nor a proper terminal segment of a formula
    can itself be a formula in LpL^p.

()(\clubs) are all claims which help one prove an even greater claim- the one
which we are interested in.

Unique Readability Theorem Every formula in LpL^p is exactly one of the
six forms: an atom, (¬A)(\lnot A), (AB)(A\land B), (AB)(A \lor B), (AB)(A\rightarrow B)
and (A    B)(A\iff B), and it is so in exactly one way.

This theorem obliterates questions about ambiguity of formulae in LpL^p.
But how can we prove such claims? Let us get some guidance from
mathematical induction. But first, a review of the natural numbers.

Natural Numbers

The natural numbers are the set of numbers that we use to count, namely
0, 1, 2, 3, 4 and so on. These numbers form an unbounded sequence. They
include 0 because it is the number with which a count actually starts- one
starts counting at 0 and increases it with each object/entity counted.

Suppose PP names a property. We write 'P(2)P(2)' to mean that '2 has the
property PP' or 'PP holds for 2'.

A statement 'Every natural number has property PP' actually corresponds to
a set of statements: P(0),P(1),P(2),P(0), P(1), P(2), \dots.

Mathematical Induction

Here is the essential principle of mathematical induction. Suppose that we
establish two things:

  • the natural number nn has property PP (BASIS)
  • when a natural number has property PP, the next natural number also has
    property PP (INDUCTIVE STEP)

\rightarrow We may conclude that every natural number greater than or equal to
nn has property PP.

Why does this work? Well, one needs to understand the definition of natural
numbers. The natural numbers are defined recursively (in a self-referential
manner). At a high level: let the nn-th natural number be denoted by NnN_n.
Define N0=0N_0 = 0 (let the first natural number be 0), and then for n1n \geq 1,
we define Nn=1+Nn1N_n = 1 + N_{n - 1} (define every natural number after the first to
be 1 plus the previous natural number). Now of course, no one would take this
definition seriously as the natural numbers are used to define themselves- the
actual definition of the natural numbers involves sets and an idea similar to
the one presented above.

So, if the first natural number has a property PP, and for any number that has
the property PP, the next number has property PP too, here's what happens.
Since PP holds for 0, it starts a domino effect- the next one, 1, also has
property PP, and therefore 2 has property PP and so on...

Takeaways: Here's what we can take away from the above insight into
natural numbers and mathematical induction.

  • To talk about something, give it a name e.g. a property PP, a number kk
  • A formula is a textual object. In this object, we can substitute one symbol
    or expression for another.
  • The induction principle describes a method for proof consisting of two parts
    • The basis and inductive steps
    • In the inductive step, hypothesize P(k)P(k) (this is called the
      inductive hypothesis) and prove P(k+1)P(k+1) using it.

NOTE: The process of induction does not specify how to use the inductive
hypothesis to prove the next case, or how to prove the basis. In each problem,
these steps will be different.
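As a standard illustrative instance (our own example, not from the course text), here is the two-part method applied to the sum 0+1++n0 + 1 + \cdots + n:

```latex
\textbf{Claim.} For all $n \in \mathbb{N}$, $0 + 1 + \cdots + n = \frac{n(n+1)}{2}$.

\textbf{Basis.} For $n = 0$: the sum is $0$, and $\frac{0(0+1)}{2} = 0$.

\textbf{Inductive step.} Assume $P(k)$: $0 + \cdots + k = \frac{k(k+1)}{2}$. Then
\[
  0 + \cdots + k + (k+1)
    = \frac{k(k+1)}{2} + (k+1)
    = \frac{(k+1)(k+2)}{2},
\]
which is exactly $P(k+1)$. Hence $P(n)$ holds for every natural number $n$.
```

Note how the inductive hypothesis $P(k)$ is used to rewrite the first $k+1$ terms, and how the basis and inductive step each required their own, problem-specific argument.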

Induction for LpL^p

Suppose one wants to prove that every formula in LpL^p has property PP. It
turns out that we cannot directly use the induction above, because a formula is
not a natural number. However, we can recast the situation and approach the
proof with an induction-based solution. We can choose to prove any of the
following:

  • For all natural numbers nn, every formula with nn or fewer symbols in it has
    property PP.
  • For all natural numbers nn, every formula with nn or fewer connectives has
    property PP.
  • For all natural numbers nn, every formula whose parse tree has height nn or
    less has property PP.
  • For all natural numbers nn, every formula reproducible with nn or less
    applications of the formation rules has property PP.

Note that the inductive step for each of these would require showing, by cases,
that if P(A)P(A) and P(B)P(B), then P((¬A))P((\lnot A)) and P((AB))P((A\star B)) (where
\star is one of ,,,    \land, \lor, \rightarrow, \iff). Formulae A,BA,B have smaller
nn values than (¬A)(\lnot A) and (AB)(A\star B).

We concoct a principle of induction which can be used in this case:

Structural Induction

The Principle of Structural Induction says: Suppose RR is a property. If:

  1. For any atomic formula pp in Atom(Lp)Atom(L^p), RR holds for pp
  2. For any formula AA in Form(Lp)Form(L^p), if R(A)R(A), then (¬A)(\lnot A) has property
    RR too
  3. For all formulae A,BA,B in Form(Lp)Form(L^p), if R(A)R(A) and R(B)R(B), then
    (AB)(A\star B) (where \star is one of ,,,    \land, \lor, \rightarrow, \iff) has
    property RR too.

then, any formula in Form(Lp)Form(L^p) has property RR.

Here is an example of structural induction in action. Suppose we want to prove
the following lemma, which is the first of the claims in ()(\clubs).

Lemma: Every well-formed formula has an equal number of left and right
parentheses.

Proof: Before starting, for convenience, we define: for all formulae AA
in Form(Lp)Form(L^p), l(A)l(A) is the number of left parentheses in AA and r(A)r(A) is
the number of right parentheses in AA.
Also, let the property being proven be called property RR.

Base Case: If AA is an atom, then it has no parentheses. Therefore,
l(A)=r(A)=0l(A) = r(A) = 0 and so R(A)R(A). The base case holds.

Inductive Step: In this step, we consider a compound formula AA built from
formulae BB and CC, for which the inductive hypothesis, that RR holds for
BB and CC, is assumed.

  • Assume AA is of the form (¬B)(\lnot B). By the inductive hypothesis, l(B)=r(B)l(B) = r(B).
    Therefore:

    l((¬B))=1+l(B)[By inspection]=1+r(B)[By inductive hypothesis]=r((¬B))[By inspection]\begin{aligned} l((\lnot B)) &= 1 + l(B) \hspace{1em}\text{[By inspection]}\\ &= 1 + r(B) \hspace{1em}\text{[By inductive hypothesis]}\\ &= r((\lnot B)) \hspace{1em}\text{[By inspection]} \end{aligned}

  • Assume AA is of the form (BC)(B\star C). By the inductive hypothesis,
    l(B)=r(B)l(B) = r(B) and l(C)=r(C)l(C) = r(C). Therefore:

    l((BC))=1+l(B)+l(C)[By inspection]=1+r(B)+r(C)[By inductive hypothesis]=r((BC))[By inspection]\begin{aligned} l((B\star C)) &= 1 + l(B) + l(C) \hspace{1em}\text{[By inspection]}\\ &= 1 + r(B) + r(C) \hspace{1em}\text{[By inductive hypothesis]}\\ &= r((B\star C)) \hspace{1em}\text{[By inspection]} \end{aligned}

Therefore, property RR holds for all well-formed formulae in LpL^p.
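The lemma can be sanity-checked by generating formulae with the formation rules and counting parentheses. A sketch (the random generator is our own, with ASCII stand-ins for the connectives):

```python
import random

def random_formula(depth):
    """Generate a random well-formed formula via the formation rules."""
    if depth == 0 or random.random() < 0.3:
        return random.choice("pqr")                      # rule 1: an atom
    if random.random() < 0.25:
        return "(~" + random_formula(depth - 1) + ")"    # rule 2: negation
    op = random.choice([" & ", " | ", " -> ", " <-> "])  # rule 3: binary
    return ("(" + random_formula(depth - 1) + op
            + random_formula(depth - 1) + ")")

random.seed(0)
ok = all(f.count("(") == f.count(")")
         for f in (random_formula(5) for _ in range(1000)))
print(ok)   # True: every generated formula is balanced
```

Of course, 1000 balanced samples prove nothing; the structural induction above is what guarantees the property for all formulae.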

We can now move on and use the tool of structural induction as the proof method
for the unique readability theorem. Before proceeding, a small discussion to
build some intuition about the proof...

Let us try to see what happens when trying to interpret a well-formed formula
in LpL^p as something that it is not. Consider the implication
F=((pq)r)F=((p\land q) \rightarrow r). We want to show that this is an implication and
not a conjunction. Let's do both to see where parsing the formula as a
conjunction fails.

If we parse the formula in FF as an implication, it would look something like:
let A=(pq)A=(p\land q) and B=rB=r, then F=(AB)F=(A\rightarrow B), where AA and BB are
well-formed formulae in LpL^p (one can check by following the rules of
formation until the formula has been constructed from the atoms p,q,rp,q,r).

Now, if we try to parse the formula as a conjunction, here is what it would
look like: let A=(pA'=(p and B=q)rB'=q) \rightarrow r, and F=(AB)F=(A'\land B'). However,
this is not a valid parse since AA' and BB' are not well-formed formulae: they
violate the lemma just proven above (every well-formed formula in LpL^p has an
equal number of left and right parentheses), and they also fail to be
constructible from the rules of formation.

What does this show us? We managed to prove that FF cannot be parsed as a
conjunction because upon decomposition, one does not end up with two
well-formed formulae as specified by LpL^p. But can we be sure that this holds
all the time?
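The failed conjunction parse can also be detected mechanically: by the lemma, each half of a valid split must have balanced parentheses, and of the two candidate splits of FF only the one at the arrow qualifies. A sketch (ASCII connectives are our stand-ins):

```python
def balanced(s):
    """A necessary condition for being a formula (by the lemma):
    equal counts, and no prefix closes more than it opens."""
    depth = 0
    for c in s:
        depth += {"(": 1, ")": -1}.get(c, 0)
        if depth < 0:
            return False
    return depth == 0

# F = ((p&q)->r); strip the outer parentheses and try the two splits.
A, B = "(p&q)", "r"               # split at "->": the implication parse
ok_impl = balanced(A) and balanced(B)

A2, B2 = "(p", "q)->r"            # split at "&": the conjunction parse
ok_conj = balanced(A2) or balanced(B2)

print(ok_impl, ok_conj)   # True False
```

Only the implication split produces two candidate subformulae; the conjunction split leaves both halves unbalanced, exactly as argued above.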

Unique Readability Theorem

Now, let us approach the unique readability theorem by rephrasing it as a
property P(n)P(n) which is stated below. Note that the notation l(x)l(x) and r(x)r(x)
for the number of left and right parentheses in expression xx will continue to
be used.

Property P(n)P(n):

Every formula, AA, containing at most nn connectives satisfies all of the
following conditions:

  1. The first symbol of AA is either a '(' (left parenthesis) or a
    propositional variable
  2. A has an equal number of right and left parentheses and every
    proper initial segment of AA has more left parentheses than right.
  3. A has a unique construction as a formula.

Where a proper initial segment is defined to be a non-empty expression xx
such that AA is xyxy for some non-empty expression yy.

Proof: (Using structural induction, of course)

Base case: The base case is n=0n=0, a formula with no logical
connectives. Such a formula has to be a propositional variable, so condition 1
is satisfied. Secondly, a propositional variable has no parentheses, so the
first part of condition 2 is satisfied; since AA in this case has no proper
initial segments, the second part of condition 2 is vacuously satisfied.
Finally, a propositional variable can only be constructed by rule 1, so
condition 3 holds as well.

Inductive step: (This is going to be a bit longer...) The inductive
hypothesis is that property P(k)P(k) holds for some kNk\in \mathbb{N},
meaning that it holds for formulae with at most kk logical connectives.
We now need to prove, using this hypothesis, that P(k+1)P(k+1) holds.
So let AA be a formula with k+1k+1 logical connectives.

Unary connective: Suppose AA is of the form (¬B)(\lnot B) for some formula BB.
Then BB has kk connectives and therefore satisfies conditions 1, 2 and 3 by the
inductive hypothesis.

  1. Clearly, the first symbol of AA is a '(', so condition 1 is fulfilled.
  2. Since BB satisfies condition 2, (¬B)(\lnot B) has an equal number of left
    and right parentheses too- AA satisfies the first half of the second
    condition. To prove the second half, we analyse the possible proper
    initial segments, xx, of AA:
    • If xx is "((", then l(x)>r(x)l(x) > r(x)
    • If xx is "(¬(\lnot", then l(x)>r(x)l(x) > r(x)
    • If xx is "(¬(\lnot"zz, where zz is some proper initial segment of BB,
      since BB satisfies condition 2, we have that l(z)>r(z)l(z) > r(z) but
      l(x)=1+l(z)l(x) = 1 + l(z) and r(x)=r(z)r(x) = r(z) which means that l(x)>r(x)l(x) > r(x).
    • If xx is "(¬(\lnot"BB, then since l(B)=r(B)l(B) = r(B), we have that
      l(x)>r(x)l(x) > r(x).
      Together, these cases show that AA satisfies condition 2.
  3. Since BB has a unique construction as a formula, so does AA- it is the
    negation of the uniquely constructed formula BB.

Binary Connectives: Suppose AA is of the form (BC)(B\star C), where \star
represents a binary logical connective and B,CB,C are formulae with at most kk
connectives; by the inductive hypothesis, property PP holds for BB and CC.

To avoid repetition, we simply state that the proof of conditions 1
and 2 is very similar to the one given for the unary connective
case. What remains is to prove the 3rd condition- let's do it!

To prove the unique construction of AA, we shall show that if AA can be
decomposed as (BC)(B'\star' C') for some binary connective '\star' and formulae
B,CB',C', then it must be the case that B=BB'=B, C=CC'=C and ='\star'=\star.
For clarity, we mean that the two decompositions,
(BC)(B'\star' C') and (BC)(B\star C), are of the same formula (same length and
same symbols in the same order).

  • Case 1: If BB' has the same length as BB, they must be the exact same
    string of symbols as they both begin at the second character of AA.

  • Case 2: Suppose BB' is a proper prefix of BB (i.e. BB' has length less
    than the length of BB). Since BB' and BB are both formulae with at most
    kk connectives, the inductive hypothesis holds for them.

    But this would mean that, by condition 2, BB' has the same number of left
    and right parentheses. However, by the same condition applied to BB, any
    proper prefix of BB (namely BB') must have more left than right
    parentheses. So assuming that BB' is a proper prefix of BB leads to a
    contradiction, and must therefore be false.

  • Case 3: BB is a proper prefix of BB'; this is ruled out by the same
    argument as case 2.

Since cases 2 and 3 lead to contradictions, the only valid case is case 1,
which implies that B=BB'=B (and consequently C=CC'=C). Therefore, the two
decompositions of AA are exactly the same, and this shows that AA is uniquely
constructed.
\blacksquare

Fun fact: In mathematical proofs, the black square (\blacksquare) indicates
the end of a proof. It is used in the same manner here.

This proof established the unique readability theorem (condition 3), though
conditions 1 and 2 were required along the way.

Consequences

Equipped with the proven unique readability theorem, we can now claim the
following:

\dagger A propositional variable is called an atom

\dagger (¬B)(\lnot B) is called a negation formula

\dagger (AB)(A \land B) is called a conjunction formula

\dagger (AB)(A \lor B) is called a disjunction formula

\dagger (AB)(A \rightarrow B) is called an implication formula

\dagger (A    B)(A \iff B) is called an equivalence formula

... and each is exactly of one kind.

This is important, as stated before, because the meaning of these formulae is
going to be derived from their syntax, so this ensures unambiguity.

Connective Precedence

For human beings, the following precedence rules are established:

  • ¬\lnot takes precedence over \land
  • \land takes precedence over \lor
  • \lor takes precedence over \rightarrow
  • \rightarrow takes precedence over     \iff

This allows some parentheses to be elided without loss of meaning or ambiguity:

p    qrp\iff q \rightarrow r can be expanded, using these precedence rules, to be
(p    (qr))(p\iff (q \rightarrow r)).

Scope

  • If AA is of the form (¬B)(\lnot B), then we say that BB is the
    scope of negation in AA.
  • If AA is of the form (BC)(B\star C) (where \star is a binary logical
    connective), then we say that BB is the left scope of the connective
    (disjunction, conjunction, ...) in AA and CC is the right scope of the
    connective in AA.

Semantics

So far, the syntax of the propositional language has been covered through the
structure of well-formed formulae in LpL^p. Now, we want to be able to analyse
the meaning of a syntactically valid/well-formed formula. This is done through
the specification of semantics (meaning) for the propositional language.

Syntax was concerned with whether a formula is well-formed. Now, we care about
when a formula is true. We shall develop methods for analysing formulae to
determine truths, the validity of arguments and so on. Let's begin.

The meaning of a formula is defined to be its truth value, which is computed
from the truth values of its constituent atomic formulae/propositions. This
means that one has to first parse the formula into its atoms. Let's look at an
example. Take the following compound sentence: "If you take a class in computer
science and if you do not understand the concept of recursion, you will not
pass". The question is- when exactly is this statement true?

To begin the analysis, we decompose the statement into its atomic propositions.

  • p=p = "You take a class in computer science"
  • q=q = "You understand the concept of recursion"
  • r=r = "You will pass"

Using these atoms, the statement can be expressed as a formula in LpL^p as
((p¬q)¬r)((p\land \lnot q) \rightarrow \lnot r) (note how the rule for precedence of
negation over conjunction and implication applies in this formula). Now, let us
see how this statement evaluates for all possible truth value combinations of
p,q,rp,q,r. Each sub-formula, by the unique readability theorem, is of one form
only. This form is with respect to some logical connective, and the truth table
for that connective is used to evaluate the sub-formula once its own
sub-formulae have been evaluated. As you can see, this is a bottom-up process-
starting with the atomic formulae and building up, through the sub-formulae,
all the way to the compound formula in question.

pp qq rr (¬q)(\lnot q) (p¬q)(p\land \lnot q) (¬r)(\lnot r) ((p¬q)¬r)((p\land \lnot q) \rightarrow \lnot r)
1 1 1 0 0 0 1
1 1 0 0 0 1 1
1 0 1 1 1 0 0
1 0 0 1 1 1 1
0 1 1 0 0 0 1
0 1 0 0 0 1 1
0 0 1 1 0 0 1
0 0 0 1 0 1 1

Clearly, the only way that this formula can evaluate to false is when p,q,rp,q,r
have truth values of 1,0,11,0,1 respectively i.e. when "You take a class in
computer science and you do not understand the concept of recursion but you
pass".
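The bottom-up evaluation above can be sketched in Python (a sketch of my own,
not part of the notes; 1 stands for true, 0 for false, and the function name is
assumed):

```python
from itertools import product

# The compound formula ((p ∧ ¬q) → ¬r), evaluated bottom-up from its
# sub-formulae, as in the truth table above.
def formula(p, q, r):
    conj = p and not q             # sub-formula (p ∧ ¬q)
    return (not conj) or (not r)   # (X → Y) is true iff X is false or Y is true

# One row per truth valuation of p, q, r.
for p, q, r in product([1, 0], repeat=3):
    print(p, q, r, int(formula(p, q, r)))
```

The loop reproduces the eight rows of the table; only the valuation
p=1, q=0, r=1 makes the formula false.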

We can now generalise what we just did through truth valuations.


Truth Valuations

A truth table is a list of the values that a formula takes under all
possible truth valuations. Fix a set {0,1}\{0,1\}. We decide to interpret 0 as
false and 1 as true.

Let AA be a formula and {p1,,pn}\{p_1,\dots,p_n\} be the set of propositional
variables that are contained in AA.
A truth valuation, tt, is a function from a set of propositional variables
{p1,,pn}\{p_1,\dots,p_n\} to the set {0,1}\{0,1\}. The action of the valuation tt on
the formula AA is denoted as AtA^t. More formally, tt is a function
t:{p1,,pn}{0,1}t:\{p_1,\dots,p_n\}\mapsto \{0,1\}.

It is important to realise that a truth table is a collection of truth
valuations, each for a possible combination of the truth values for the
constituent propositional variables.
In the truth table above, the first three columns p,q,rp,q,r specify the truth
values for the propositional variables. If there are nn propositional
variables, there will be 2n2^n rows in the truth table (convince yourself).
The next few columns are intermediate steps to the final column, which is
the value of the truth valuation, evaluated at the truth values for the
propositional variables on that row.

The beauty of the framework that we have created to do logic within is that
it is compatible with recursion- which is essentially when a definition is
self-referential. This will be useful when defining what a truth valuation for
a general formula in LpL^p is (hint: compare the idea of recursion to the
process performed in the truth table above- the definition of a truth valuation
is essentially that process).

Definition: Fix a truth valuation tt. The
value of the formula CC in Form(Lp)Form(L^p) under the truth valuation tt is
denoted by CtC^t and is defined recursively as follows:

  1. ptp^t is given by the definition of tt, for every propositional variable,
    pp.
  2. (¬A)t={1 if At=00 if At=1(\lnot A)^t = \begin{cases} 1 &\text{ if } A^t =0\\ 0 &\text{ if } A^t =1\\ \end{cases}
  3. (AB)t={1 if At=Bt=10 otherwise (A\land B)^t = \begin{cases} 1 &\text{ if } A^t = B^t = 1\\ 0 &\text{ otherwise }\\ \end{cases}
  4. (AB)t={1 if At=1 or Bt=10 otherwise (A\lor B)^t = \begin{cases} 1 &\text{ if } A^t = 1 \text{ or } B^t = 1\\ 0 &\text{ otherwise }\\ \end{cases}
  5. (AB)t={1 if At=0 or Bt=10 otherwise (A\rightarrow B)^t = \begin{cases} 1 &\text{ if } A^t = 0 \text{ or } B^t = 1\\ 0 &\text{ otherwise }\\ \end{cases}
  6. (A    B)t={1 if At=Bt0 otherwise (A\iff B)^t = \begin{cases} 1 &\text{ if } A^t = B^t\\ 0 &\text{ otherwise }\\ \end{cases}
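This recursive definition translates directly into code. Below is a minimal
sketch (the tuple encoding of formulae and all names are my own choices, not
part of the notes):

```python
# Formulae as nested tuples, e.g. ('var', 'p'), ('not', A), ('and', A, B);
# a truth valuation t is a dict mapping variable names to 0 or 1.
def value(formula, t):
    op = formula[0]
    if op == 'var':                      # case 1: p^t is given by t
        return t[formula[1]]
    if op == 'not':                      # case 2: (¬A)^t
        return 1 - value(formula[1], t)
    a, b = value(formula[1], t), value(formula[2], t)
    if op == 'and':                      # case 3
        return 1 if a == b == 1 else 0
    if op == 'or':                       # case 4
        return 1 if a == 1 or b == 1 else 0
    if op == 'implies':                  # case 5
        return 1 if a == 0 or b == 1 else 0
    if op == 'iff':                      # case 6
        return 1 if a == b else 0
    raise ValueError('unknown connective: ' + str(op))

# ((p ∨ q) → (q ∧ r)) under the valuation p = q = r = 1.
A = ('implies', ('or', ('var', 'p'), ('var', 'q')),
                ('and', ('var', 'q'), ('var', 'r')))
print(value(A, {'p': 1, 'q': 1, 'r': 1}))  # prints 1
```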

Suppose AA is pqqrp\lor q \rightarrow q\land r and tt is a truth valuation under
which pt=qt=rt=1p^t = q^t = r^t = 1. We follow the precedence rules to work with this
unparenthesized formula: (pq)t=1(p\lor q)^t = 1 and (qr)t=1(q\land r)^t = 1, therefore
At=1A^t = 1. Another truth valuation, t1t_1 could be such that pt1=1p^{t_1} = 1 and
qt1=rt1=0q^{t_1} = r^{t_1} = 0. Then (pq)t1=1(p\lor q)^{t_1}=1 and (qr)t1=0(q\land r)^{t_1}=0
so that At1=0A^{t_1}=0. This shows that different truth valuations can lead to
different truth values for the subformulae and the formula at large.

Note: We now know how to form propositional formulae from everyday compound
sentences and evaluate them under specific truth values of the constituent
atoms/propositional variables. Now, we proceed to analyse entities larger than
sentences- arguments. We shall see how to relate a collection of
propositional formulae (premises) to another propositional formula (the
conclusion), similar to the discussions at the beginning of this document.
What we do now is to make some definitions which will allow us to analyse
arguments in this LpL^p framework we have studied.

Definition: We say that a truth valuation tt satisfies a formula AA
in Form(Lp)Form(L^p) iff At=1A^t=1.

The Greek letter Σ\Sigma is used to denote an arbitrary set of formulae.

Define:

Σt={1 if for each formula BΣ,Bt=10 otherwise \Sigma^t = \begin{cases} 1 &\text{ if for each formula } B\in\Sigma, \hspace{.5em} B^t = 1\\ 0 &\text{ otherwise } \end{cases}

Definition: A set of formulae ΣForm(Lp)\Sigma\subseteq Form(L^p) is said to be
satisfiable iff there exists a truth valuation tt such that Σt=1\Sigma^t = 1.
When this happens, one says that Σ\Sigma is satisfied under tt.

  • Observe that if Σt=1\Sigma^t=1, then all formulae in Σ\Sigma must evaluate
    to be true under the truth valuation tt.
  • If however, Σt=0\Sigma^t =0, then at least one formula in Σ\Sigma must
    evaluate to be false under the truth valuation tt. This is important to
    understand because it is easily misunderstood that Σt=0\Sigma^t=0 means that
    every formula in Σ\Sigma evaluates to be false under the truth valuation
    tt- this is not true!
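These definitions can be checked by brute force over all truth valuations. The
following is a sketch of my own (formulae represented as Python functions of a
valuation; the helper names are assumptions):

```python
from itertools import product

# Sigma^t is 1 iff every formula in sigma evaluates to 1 under t.
def sigma_value(sigma, t):
    return 1 if all(f(t) for f in sigma) else 0

# Sigma is satisfiable iff some valuation t over the given variables
# makes Sigma^t = 1.
def satisfiable(sigma, variables):
    return any(sigma_value(sigma, dict(zip(variables, bits)))
               for bits in product([0, 1], repeat=len(variables)))

# {p, p → q} is satisfied by the valuation p = q = 1 ...
print(satisfiable([lambda t: t['p'],
                   lambda t: (not t['p']) or t['q']], ['p', 'q']))
# ... while {p, ¬p} is satisfied by no valuation.
print(satisfiable([lambda t: t['p'],
                   lambda t: not t['p']], ['p']))
```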

Example: We explore satisfiability through the example of a 4x4 Sudoku game
analysis. In 4x4 Sudoku, each number 1,2,3,41,2,3,4 must appear once in each row,
once in each column and once in each 2x2 block in the partitioned 4x4 grid.

Before we write any formulae, we set up our propositional variables. Let
pi,j,kp_{i,j,k} be the proposition that "the cell in the ii-th row, jj-th column
contains a kk", where 1i,j,k41\leq i,j,k\leq 4 of course.

To verify that a sudoku game is winnable, we need to verify that the game's
rules can be satisfied. So, suppose we are given a 4x4 Sudoku grid with nn
starting values {k1,,kn}\{k_1,\dots,k_n\}, in positions
{(i1,j1),,(in,jn)}\{(i_1,j_1), \dots, (i_n,j_n)\} (format (row,column)(row, column)).

Let's start off with an empty set Σ={}=\Sigma = \{\} = \emptyset, which will hold all of the
formulae (representing conditions of the game) that have to be satisfied in
order to win. We shall populate it with conditions now:

  • To verify that a solution for the given grid is indeed for that given grid,
    one can start by verifying that the starting values match. This can be done
    by satisfying each formula in
    Σ1={pi1,j1,k1,pi2,j2,k2,,pin,jn,kn}\Sigma_1 = \{p_{i_1,j_1,k_1}, p_{i_2,j_2,k_2}, \dots, p_{i_n,j_n,k_n}\}.
    We can now add each formula in Σ1\Sigma_1 to Σ\Sigma above.

  • Next, to ensure that there is at most one digit per position in the grid,
    we can do the following. For each position (i,j)(i,j) and each pair of non-equal
    possible values k,kk,k', we add the formula
    pi,j,k¬pi,j,kp_{i,j,k}\rightarrow \lnot p_{i,j,k'} to the set Σ\Sigma.

  • Next, we verify that each digit appears once in each row. First, we can check
    that for each row, each digit appears at least once. This can be done by
    adding the formula
    (pi,1,kpi,2,kpi,3,kpi,4,k)(p_{i,1,k}\lor p_{i,2,k}\lor p_{i,3,k}\lor p_{i,4,k}) for i,ki,k
    both ranging over {1,2,3,4}\{1,2,3,4\}.

    Now, to make sure that each number appears at most once in each row, we look
    at every pair of distinct cells (i,j)(i,j) and (i,j)(i,j') (where jjj\neq j') in each
    row and add the formula pi,j,k¬pi,j,kp_{i,j,k}\rightarrow \lnot p_{i,j',k} for all
    values of kk.

  • To verify that the columns are properly filled, the same process as the rows
    is performed, with the necessary index changes.

  • The same goes for verifying that the 2x2 boxes in the 4x4 grid are also
    properly filled- also with the indexing changes required.

Once all of this has been done, Σ\Sigma contains all of the formulae which all
have to be true in order for the Sudoku grid to be considered solved. This
condition can be checked by forming one large formula by taking the conjunction
of each of the formulae in Σ\Sigma.
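As a sketch of this construction (the tuple encoding of propositions and
formulae below is my own assumption, as is the example starting grid), here is
how the starting values and the row-related parts of Σ could be generated:

```python
from itertools import product, combinations

# A proposition p_{i,j,k} is the tuple (i, j, k); a constraint is either a
# single proposition, ('implies', P, ('not', Q)), or ('or', [P1, P2, P3, P4]).
R = [1, 2, 3, 4]
sigma = []

# Starting values: an assumed example grid with a single 3 at row 1, column 2.
starting = {(1, 2): 3}
for (i, j), k in starting.items():
    sigma.append((i, j, k))

# At most one digit per cell: p_{i,j,k} → ¬p_{i,j,k'} for each pair k ≠ k'.
for i, j in product(R, R):
    for k, k2 in combinations(R, 2):
        sigma.append(('implies', (i, j, k), ('not', (i, j, k2))))

# Each digit at least once per row: p_{i,1,k} ∨ p_{i,2,k} ∨ p_{i,3,k} ∨ p_{i,4,k}.
for i, k in product(R, R):
    sigma.append(('or', [(i, j, k) for j in R]))

# Each digit at most once per row: p_{i,j,k} → ¬p_{i,j',k} for j ≠ j'.
for i, k in product(R, R):
    for j, j2 in combinations(R, 2):
        sigma.append(('implies', (i, j, k), ('not', (i, j2, k))))

print(len(sigma))  # number of constraints collected so far
```

The column and 2x2-block constraints would be generated the same way with the
indices changed.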

Definition: A formula AA is a tautology iff it is true under all
possible truth valuations tt.

Definition: A formula AA is a contradiction iff it is false under all
possible truth valuations tt.

Definition: A formula that is neither a tautology nor a contradiction is
said to be contingent.

Example This is an important tautology that we have all dealt with in real
life: the law of the excluded middle ("tertium non datur") states that
p¬pp\lor \lnot p is a tautology. Let us check this by drawing out the truth
table and verifying that it evaluates to be true under all possible truth
values:

pp ¬p\lnot p p¬pp\lor \lnot p
1 0 1
0 1 1

Now, you may be wondering why this is important. Well, tautologies can be
considered to be a method of generating statements that are always true (a way
of bootstrapping towards truth, if you may...). For example, take the formula
(qr)(q\land r), which is clearly contingent. However, using the tautology
(p¬p)(p\lor\lnot p), one can be sure that the formula obtained by replacing
all instances of pp in the tautology with (qr)(q\land r), namely
((qr)¬(qr))((q\land r)\lor \lnot (q\land r)), is also a tautology.
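A brute-force check (a sketch; the helper name is my own) confirms that the
substituted formula is still a tautology:

```python
from itertools import product

# A formula (as a function of a valuation dict) is a tautology iff it is
# true under every truth valuation of its variables.
def is_tautology(f, variables):
    return all(f(dict(zip(variables, bits)))
               for bits in product([0, 1], repeat=len(variables)))

# ((q ∧ r) ∨ ¬(q ∧ r)): the law of the excluded middle with p replaced
# by (q ∧ r).
substituted = lambda t: (t['q'] and t['r']) or not (t['q'] and t['r'])
print(is_tautology(substituted, ['q', 'r']))  # prints True
```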

We formalise and generalise this...

Theorem: Let AA be a tautology and let p1,,pnp_1,\dots,p_n be the
propositional variables of AA. Suppose that A1,,AnA_1,\dots,A_n are arbitrary
formulae. Then the formula obtained by replacing p1p_1 by A1A_1, p2p_2 by A2A_2,
..., pnp_n by AnA_n is a tautology.

Example This is an important contradiction, which presumably, we have all
encountered in one way or another. It is the law of contradiction, which says
that "Nothing can both be, and not be" i.e. (p¬p)(p\land \lnot p). Again, we check
the truth table for this formula to ensure that it evaluates to false under all
possible truth values.

pp ¬p\lnot p p¬pp\land \lnot p
1 0 0
0 1 0

Note: Some eagle-eyed readers must have noticed the inherent connection
between tautologies and contradictions- you cannot have one without the other.
AA is a tautology iff ¬A\lnot A is a contradiction. Can you see why?

These can be summarized through Plato's three essential laws of thought:

  • Law of identity: "Whatever is, is". p=pp=p
  • Law of contradiction: "Nothing can both be, and not be".
    ¬(p¬p)\lnot(p\land \lnot p)
  • Law of excluded middle: "Everything must either be or not be".
    (p¬p)(p\lor \lnot p)
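The last two laws can also be verified mechanically. A small sketch (the
function names are my own):

```python
# A one-variable formula is a tautology iff it is true for p = 0 and p = 1.
def tautology(f):
    return all(f(p) for p in (0, 1))

law_of_contradiction = lambda p: not (p and not p)  # ¬(p ∧ ¬p)
excluded_middle = lambda p: p or not p              # p ∨ ¬p

print(tautology(law_of_contradiction))  # prints True
print(tautology(excluded_middle))       # prints True
```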

Argument Validity

As stated in the beginning of this document, the idea is to only analyse the
structure of an argument. That is why propositional language is being used- to
abstract out the content of an argument and narrow focus to the structure. A
typical argument states a series of propositions (called the premises) and
using those premises, states a final proposition (called the conclusion).
If the argument is correct (valid, sound), then the conclusion is guaranteed to
be correct provided that the content of the premises is correct. We formalize
these ideas now.

Definition: Let ΣForm(Lp)\Sigma \subseteq Form(L^p) and AForm(Lp)A\in Form(L^p). AA is
said to be a tautological consequence of the formulae in Σ\Sigma iff for
any truth valuation tt, we have that Σt=1\Sigma^t=1 implies that At=1A^t=1. When
this is the case, it is denoted as ΣA\Sigma \models A.

Note the following:

  • "\models" is NOT a valid symbol in LpL^p (as per the specifications above)
    and so ΣA\Sigma\models A is not a formula. Instead, it is a statement made
    about Σ\Sigma and AA.
  • When it is not the case that ΣA\Sigma\models A, this is denoted as
    Σ⊭A\Sigma\not\models A.
  • When ΣA\Sigma \models A, we say that the formulae in Σ\Sigma
    (tauto)logically imply the formula AA or that Σ\Sigma
    semantically entails AA.

Special Case: What if Σ=={}\Sigma = \emptyset = \{\}? By the definitions we
just made, if Σ=A\Sigma=\emptyset \models A, then for every valuation tt, if
t=1\emptyset^t =1 then At=1A^t = 1. Now, t=1\emptyset^t = 1 means that for every
formula BB in Σ\Sigma, Bt=1B^t=1. But since there are no formulae in Σ\Sigma,
Σt=1\Sigma^t = 1 is vacuously true under every valuation. As a result, ΣA\Sigma \models A
holds exactly when At=1A^t=1 under every valuation tt, i.e. when AA is a
tautology.

Arguing intuitively, ΣA\Sigma\models A means that the truth of the formulae in
Σ\Sigma is enough for AA to be true. But since Σ\Sigma has no formulae,
this requirement imposes no conditions at all, so AA must be true
unconditionally; therefore it is a tautology.

Argument Validity and Satisfiability

Let Σ={A1,,An}Form(Lp)\Sigma = \{A_1,\dots,A_n\}\subseteq Form(L^p) be a set of formulae (premises)
and let CForm(Lp)C\in Form(L^p) be a formula (conclusion). The following are equivalent:

  • The argument with premises {A1,,An}\{A_1,\dots,A_n\} and conclusion CC is valid
  • (A1An)C(A_1\land \dots \land A_n)\rightarrow C is a tautology
  • (A1An¬C)(A_1\land \dots \land A_n \land \lnot C) is a contradiction
  • (A1An¬C)(A_1\land \dots \land A_n \land \lnot C) is not satisfiable
  • The set {A1An¬C}\{A_1\land \dots \land A_n \land \lnot C\} is not satisfiable
  • CC is a tautological consequence of Σ\Sigma i.e. ΣC\Sigma\models C
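Two of these equivalent conditions can be checked by brute force for the
argument with premises {p→q, q→r} and conclusion p→r (a sketch; all names are
my own):

```python
from itertools import product

premises = [lambda t: (not t['p']) or t['q'],    # p → q
            lambda t: (not t['q']) or t['r']]    # q → r
conclusion = lambda t: (not t['p']) or t['r']    # p → r

valuations = [dict(zip('pqr', bits)) for bits in product([0, 1], repeat=3)]

# Sigma ⊨ C: whenever every premise is true, the conclusion is true.
entails = all(conclusion(t) for t in valuations
              if all(A(t) for A in premises))

# (A1 ∧ A2 ∧ ¬C) is satisfiable: some valuation makes it true.
satisfiable = any(all(A(t) for A in premises) and not conclusion(t)
                  for t in valuations)

print(entails, satisfiable)  # prints True False
```

As the list above predicts, the argument is valid exactly when the conjunction
of the premises with the negated conclusion is not satisfiable.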

Note: Here are some key points that one ought to remember about validity:

Consider an argument comprised of the premises
{A1,,An}Form(Lp)\{A_1,\dots,A_n\}\subseteq Form(L^p) and conclusion CForm(Lp)C\in Form(L^p).

  • A valid/sound argument does not guarantee the truth value of the conclusion.
    As mentioned before, there is another requirement for the conclusion to be
    true: the truth of all the premises. Formally, the conclusion CC is guaranteed to be true iff
    • The argument with premises {A1,,An}\{A_1,\dots,A_n\} and conclusion CC is
      valid and
    • The premises A1,,AnA_1,\dots,A_n are all true
  • TLDR: a valid argument (sound reasoning) only guarantees the truth of a
    conclusion if the argument is based on true premises (assumptions).

Definition: For two formulae A,BForm(Lp)A,B\in Form(L^p), the notation
A ⁣BA \models\!\mid B is used to denote that ABA\models B and BAB\models A.
AA and BB are said to be tautologically equivalent (or just equivalent)
iff A ⁣BA\models\!\mid B holds.

Formulae that are tautologically equivalent have the property that they are
assigned the same truth values under any truth valuation.

Notes:

  • "(Tauto)logically implies" ABA\models B is different from the meaning of
    the implication (AB)(A\rightarrow B). By its definition, ABA\models B iff
    (AB)(A\rightarrow B) is a tautology.
  • "(Tauto)logically equivalent" (A ⁣BA\models\!\mid B) is different from the
    meaning of the equivalence (A    B)(A\iff B). By its definition, A ⁣BA\models\!\mid B
    iff (A    B)(A\iff B) is a tautology.

Validity Proofs with Truth Tables

To verify the statement ΣA\Sigma\models A (i.e. to prove the validity of the
argument with premises Σ\Sigma and conclusion AA), it is sufficient to show
that all truth valuations tt which satisfy Σ\Sigma also satisfy AA (note
that there can exist truth valuations which satisfy AA but not Σ\Sigma and
this is fine).

Here's an example showing that
{pq,qr}(pr)\{p\rightarrow q,q\rightarrow r\}\models (p\rightarrow r) using a truth table.
Letting A1=(pq)A_1=(p\rightarrow q) and A2=(qr)A_2=(q\rightarrow r) be the premises:

ROW # pp qq rr (pq)(p\rightarrow q) (qr)(q\rightarrow r) (A1A2)(A_1\land A_2) (pr)(p\rightarrow r)
1 1 1 1 1 1 1 1
2 1 1 0 1 0 0 0
3 1 0 1 0 1 0 1
4 1 0 0 0 1 0 0
5 0 1 1 1 1 1 1
6 0 1 0 1 0 0 1
7 0 0 1 1 1 1 1
8 0 0 0 1 1 1 1

The truth valuations represented by rows 1,5,7,8 are the ones under which the
premises evaluate to be true. What is important is that when the premises are
true, the conclusion is also true. This shows that
{pq,qr}(pr)\{p\rightarrow q,q\rightarrow r\}\models (p\rightarrow r).

Therefore, the argument

  1. (pq)(p\rightarrow q)
  2. (qr)(q\rightarrow r)

Conclusion: (pr)(p\rightarrow r)

is a valid argument. Try it- let p,q,rp,q,r be any propositions for which the
premises above are true- the conclusion prp\rightarrow r will also be true.

Now, to prove that an argument is NOT valid, one has to repeat the same truth
table drawing as above for that argument, and find at least one truth valuation
under which the premises are satisfied, but not the conclusion. That is
enough to prove the invalidity of the argument.

There's a problem with this method of validating arguments though. For a
formula with nn variables and mm connectives, the truth table has 2n2^n rows
and at most n+mn+m columns. This gets out of hand quickly as the number of
variables grows.

Validity without Truth Tables

In this case, we can be smart and use the following proof methods.

Proof by Contradiction: Remember that in logic, a contradiction is a statement
which is never true. In the name of this proof method, the word is used
slightly differently. The method works by assuming the opposite of what is
being proven and showing that this assumption leads to an absurdity.

Say, again, that we want to show that
{AB,BC}AC\{A\rightarrow B,B\rightarrow C\}\models A\rightarrow C.

Proof by contradiction: Assume the opposite-
{AB,BC}⊭AC\{A\rightarrow B,B\rightarrow C\}\not\models A\rightarrow C.

This means that there exists a truth valuation under which the premises are
true but not the conclusion i.e. there exists a truth valuation tt such that:

  1. (AB)t=1(A\rightarrow B)^t=1
  2. (BC)t=1(B\rightarrow C)^t=1
  3. (AC)t=0(A\rightarrow C)^t=0

Here's what one can deduce from this: by the 3rd line, it must be that
At=1A^t = 1 and Ct=0C^t = 0. But if At=1A^t=1, this must mean that Bt=1B^t=1 by the 1st
line. And if Bt=1B^t=1, this means that Ct=1C^t=1 to satisfy the 2nd line.
But wait: by the third line, Ct=0C^t = 0, yet by the 2nd line, Ct=1C^t = 1, which
shows that this is an absurd situation. For the assumption to hold, CtC^t would
need to both be and not be 1- which is never true. Therefore, the assumption that we
made at the beginning, that
{AB,BC}⊭AC\{A\rightarrow B,B\rightarrow C\}\not\models A\rightarrow C is false.
As a result, {AB,BC}AC\{A\rightarrow B,B\rightarrow C\}\models A\rightarrow C is true,
proving that the argument is valid.

Proof by Counterexample: This method works by constructing a truth
valuation under which the premises are true, but the conclusion is false.

Suppose we want to show that
{(p¬q)r,q¬r,p    r}⊭(¬p(qr))\{(p\rightarrow \lnot q)\lor r, q\land \lnot r, p\iff r\}\not\models (\lnot p\land (q\rightarrow r)).

Proof by counterexample: By any method you can think of, construct a
counterexample such as the truth valuation tt under which pt=0p^t = 0, qt=1q^t = 1
and rt=0r^t = 0. Under this:

  • Premise 1: ((p¬q)r)t=1((p\rightarrow \lnot q)\lor r)^t = 1
  • Premise 2: (q¬r)t=1(q\land \lnot r)^t = 1
  • Premise 3: (p    r)t=1(p\iff r)^t = 1
  • Conclusion: (¬p(qr))t=0(\lnot p\land (q\rightarrow r))^t = 0

So we have a counterexample- the truth valuation tt. Therefore, this argument
is not valid.
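Instead of constructing the counterexample by hand, one can also search for one
exhaustively. A sketch (the function names are my own) for the argument above:

```python
from itertools import product

# Conjunction of the three premises: ((p → ¬q) ∨ r) ∧ (q ∧ ¬r) ∧ (p ⟺ r).
def premises(p, q, r):
    return (((not p) or (not q)) or r) and (q and not r) and (p == r)

# The conclusion: ¬p ∧ (q → r).
def conclusion(p, q, r):
    return (not p) and ((not q) or r)

# Counterexamples: valuations satisfying the premises but not the conclusion.
found = [(p, q, r) for p, q, r in product([0, 1], repeat=3)
         if premises(p, q, r) and not conclusion(p, q, r)]
print(found)  # prints [(0, 1, 0)]
```

The single counterexample found is exactly the valuation used in the proof
above.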

There are smarter ways however...

Important Tautological Equivalences

Consider the following statements:

  • "He is either not informed or he is not honest"
  • "It is not true that he is informed and honest"

These two statements seem to be logically equivalent. Let's prove this.
Let p=p= "he is honest" and q=q= "he is informed". The first statement
translates to (¬p¬q)(\lnot p \lor \lnot q) and the second one to
(¬(pq))(\lnot (p\land q)).

We have De Morgan's Law:
(¬(pq)) ⁣(¬p¬q)(\lnot (p\land q)) \models\!\mid (\lnot p \lor \lnot q)

pp qq ¬p\lnot p ¬q\lnot q pqp\land q ¬p¬q\lnot p \lor \lnot q (¬(pq))(\lnot (p\land q))
1 1 0 0 1 0 0
1 0 0 1 0 1 1
0 1 1 0 0 1 1
0 0 1 1 0 1 1

De Morgan's Laws are used to negate conjunctions and disjunctions. The one
above can be summarized as "To negate a conjunction, take the disjunction of
the negations of the conjuncts".

We also have Dual De Morgan's Law:
(¬(pq)) ⁣(¬p¬q)(\lnot (p\lor q))\models\!\mid (\lnot p\land \lnot q). Again, this can be
readily shown by drawing out a truth table. This one says: "To negate a
disjunction, take the conjunction of the negations of the disjuncts".
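Both De Morgan equivalences can be confirmed mechanically; here is a sketch (the
helper name is my own) that compares the two sides under every valuation:

```python
from itertools import product

# Two formulae are tautologically equivalent iff they agree under every
# truth valuation of their n variables.
def equivalent(f, g, n):
    return all(bool(f(*bits)) == bool(g(*bits))
               for bits in product([0, 1], repeat=n))

print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q), 2))   # De Morgan's Law
print(equivalent(lambda p, q: not (p or q),
                 lambda p, q: (not p) and (not q), 2))  # Dual De Morgan's Law
```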

Next, consider the following statements:

  • "If the goods were not delivered, the customer cannot have paid"
  • "If the customer has paid, the goods must have been delivered"

Let p=p= "the goods were delivered" and q=q= "the customer has paid", then the
two sentences above translate to (¬p¬q)(\lnot p \rightarrow \lnot q) and
(qp)(q\rightarrow p).

The two (¬p¬q)(\lnot p \rightarrow \lnot q) and (qp)(q\rightarrow p) are called
contrapositives of each other. The table below shows that these are
tautologically equivalent i.e.
(¬p¬q) ⁣(qp)(\lnot p \rightarrow \lnot q) \models\!\mid (q\rightarrow p)

pp qq ¬p\lnot p ¬q\lnot q ¬p¬q\lnot p \rightarrow \lnot q (qp)(q\rightarrow p)
1 1 0 0 1 1
1 0 0 1 1 1
0 1 1 0 0 0
0 0 1 1 1 1

The contrapositive can be useful in proofs when it is easier to prove
¬q¬p\lnot q\rightarrow \lnot p than pqp\rightarrow q.

Contrapositive VS Converse: It is easy to get these two confused and this
leads to logical errors since they are not tautologically equivalent.

  • The contrapositive of (pq)(p\rightarrow q) is (¬q¬p)(\lnot q\rightarrow \lnot p)
    and it IS equivalent to the original formula
  • The converse of (pq)(p\rightarrow q) is (qp)(q\rightarrow p) and it is NOT
    equivalent to the original formula

Another important tautological equivalence is the biconditional which
states that (p    q) ⁣((pq)(qp))(p\iff q)\models\!\mid ((p\rightarrow q)\land (q\rightarrow p)).
We can see this in the following truth table:

pp qq pqp\rightarrow q qpq\rightarrow p (pq)(qp)(p\rightarrow q)\land (q\rightarrow p) p    qp\iff q
1 1 1 1 1 1
1 0 0 1 0 0
0 1 1 0 0 0
0 0 1 1 1 1

The biconditional is important in proofs: when proving p    qp\iff q, proving
pqp\rightarrow q and qpq\rightarrow p is sufficient.

Lemma: If A ⁣AA\models\!\mid A' and B ⁣BB\models\!\mid B', then:

  1. ¬A ⁣¬A\lnot A \models\!\mid \lnot A'
  2. AB ⁣ABA\land B \models\!\mid A'\land B'
  3. AB ⁣ABA\lor B \models\!\mid A'\lor B'
  4. AB ⁣ABA\rightarrow B \models\!\mid A'\rightarrow B'
  5. A    B ⁣A    BA\iff B \models\!\mid A'\iff B'

Theorem [Replaceability of Equivalent Formulae]: Let B ⁣CB\models\!\mid C.
If AA' results from replacing some (not necessarily all) instances of BB in
AA by CC, then A ⁣AA\models\!\mid A'.

Proof: [An exercise in structural induction]

Theorem [Negating a formula]: Suppose AA is a formula composed of some
atoms and the connectives ¬,,\lnot,\land,\lor only, by the formation rules.
Suppose that Δ(A)\Delta(A) results from replacing \land with \lor and \lor
with \land in AA, as well as replacing each propositional variable with its
negation. Then A ⁣¬Δ(A)A\models\!\mid \lnot \Delta(A).

Proof: [An exercise in structural induction]

For example, suppose A=(p¬q)(¬rs)A=(p\land\lnot q)\land(\lnot r\land s). The
negation of AA by the above theorem is
¬A=(¬pq)(r¬s)\lnot A = (\lnot p\lor q)\lor (r\lor \lnot s).

Formula Simplification, Essential Laws and Normal Forms

Here, we draw a similarity between simple algebraic manipulations and
simplification of formulae in the propositional language. In algebra, the
expression (a+b)b(a + b) - b clearly yields aa. But how do we know this? The
rules that were followed to reach the deduction that the expression evaluated to
aa were the associativity of addition, and how zero is defined i.e. yy=0y-y=0 and
x+0=xx+0=x. In essence, this helped us reach a simplified form of the expression,
which can help to make it easier to understand and work with.

Now, consider the following propositional formula (pq)¬q(p \land q) \land \lnot q.
When seeking a simplification of this formula, the process is very similar to
the algebraic one above. The difference is that in this case, the algebraic
rules are actually (tauto)logical equivalences. We use the following
equivalences in order to simplify this logical formula:

(AB)C ⁣A(BC)A¬A ⁣0A0 ⁣0\begin{aligned} (A\land B) \land C &\models\!\mid A\land (B\land C) \\ A\land \lnot A &\models\!\mid 0 \\ A\land 0 &\models\!\mid 0 \end{aligned}

Using these equivalences above, we can do the following:

(pq)¬q ⁣p(q¬q) ⁣p0 ⁣0\begin{aligned} (p\land q) \land\lnot q &\models\!\mid p\land (q\land \lnot q)\\ &\models\!\mid p\land 0 \\ &\models\!\mid 0 \end{aligned}

Simplification of Conditionals and Biconditionals

Since the symbolic treatment of conditionals and biconditionals is relatively
cumbersome, one usually "substitutes" them using logically equivalent formulae.
In order to remove the conditional from a formula, one may perform a
substitution using the following equivalence:

pq ⁣¬pqp\rightarrow q \models\!\mid \lnot p \lor q

To get rid of the biconditional, one may use either of the following forms:

p    q ⁣(pq)(¬p¬q) ⁣(pq)(qp) ⁣(¬pq)(p¬q)\begin{aligned} p \iff q &\models\!\mid (p\land q)\lor (\lnot p\land \lnot q)\\ &\models\!\mid (p\rightarrow q) \land (q\rightarrow p) \\ &\models\!\mid (\lnot p\lor q) \land (p\lor \lnot q) \end{aligned}

Example:

(pqr)((r    s)(qs)) ⁣(¬pqr)(((rs)(¬q¬s))(qs))\begin{aligned} (p\rightarrow q\land r)\lor ((r\iff s) \land (q\lor s)) \models\!\mid (\lnot p \lor q\land r) \lor ( ((r\land s)\lor (\lnot q\land \lnot s)) \land (q\lor s)) \end{aligned}

Essential Laws of Propositional Calculus

Name Law
Excluded Middle p¬p ⁣1p\lor \lnot p \models\!\mid 1
Contradiction Law p¬p ⁣0p\land \lnot p \models\!\mid 0
Identity Laws p0 ⁣pp\lor 0 \models\!\mid p and p1 ⁣pp\land 1 \models\!\mid p
Domination Laws p1 ⁣1p\lor 1\models\!\mid 1 and p0 ⁣0p\land 0 \models\!\mid 0
Idempotent Laws pp ⁣pp\lor p\models\!\mid p and pp ⁣pp\land p\models\!\mid p
Double-negation ¬(¬p) ⁣p\lnot(\lnot p) \models\!\mid p
Commutative Laws pq ⁣qpp\land q\models\!\mid q\land p and pq ⁣qpp\lor q\models\!\mid q\lor p
Associative Laws (pq)r ⁣p(qr)(p\lor q)\lor r \models\!\mid p\lor (q\lor r) and (pq)r ⁣p(qr)(p\land q)\land r \models\!\mid p\land(q\land r)
Distributive Laws p(qr) ⁣(pq)(pr)p\land (q\lor r) \models\!\mid (p\land q)\lor (p\land r) and p(qr) ⁣(pq)(pr)p\lor (q\land r) \models\!\mid (p\lor q)\land (p\lor r)
De Morgan's Laws ¬(pq) ⁣¬p¬q\lnot (p\land q)\models\!\mid \lnot p \lor \lnot q and ¬(pq) ⁣¬p¬q\lnot (p\lor q) \models\!\mid \lnot p \land \lnot q
  • Note that all of these laws can be readily proven using a truth table
  • All laws come in pairs (except the double negation law). These pairs are
    called dual pairs: for each formula depending only on the connectives
    ¬,,\lnot, \land, \lor, the dual is obtained by replacing each 1 by a 0 and each
    0 by a 1, all \land with \lor and finally replace all \lor with \land.
  • The laws help one simplify a formula and should therefore be applied whenever
    they can. There is no point in leaving the formula ¬¬p(q¬q)\lnot\lnot p\lor (q\land\lnot q) as it is. It is better simplified as just pp.
  • A non-perfect analogy for the commutative, associative and distributive
    laws is found in the algebraic operators ,+\cdot,+. The \land is often treated
    as \cdot (multiplication) and \lor is often treated as ++ (not always...).

Here are two other laws which can be derived from the laws above. Firstly, here
are the absorption laws:

  • p(pq) ⁣pp\land (p\lor q) \models\!\mid p
  • p(pq) ⁣pp\lor (p\land q) \models\!\mid p

They can be proven using the identity, distributivity and domination laws:

p(pq) ⁣(p1)(pq)Identity ⁣p(1q)Distributivity (in reverse) ⁣p1Domination ⁣pIdentity\begin{aligned} p\lor (p\land q) &\models\!\mid (p\land 1) \lor (p\land q) &\text{Identity}\\ &\models\!\mid p \land (1\lor q) &\text{Distributivity (in reverse)}\\ &\models\!\mid p \land 1 &\text{Domination}\\ &\models\!\mid p &\text{Identity} \end{aligned}

Here is another important law, with its dual:

  • (pq)(¬pq) ⁣q(p\land q)\lor (\lnot p\land q) \models\!\mid q
  • (pq)(¬pq) ⁣q(p\lor q) \land (\lnot p \lor q) \models\!\mid q

These can be proven using distributivity in reverse, followed by the law of the
excluded middle and finally the identity laws.
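These derived laws are also easy to confirm exhaustively (a sketch; the helper
name is my own):

```python
from itertools import product

# Compare two formulae in p, q under all four truth valuations.
def equivalent(f, g):
    return all(bool(f(p, q)) == bool(g(p, q))
               for p, q in product([0, 1], repeat=2))

print(equivalent(lambda p, q: p and (p or q), lambda p, q: p))  # absorption
print(equivalent(lambda p, q: p or (p and q), lambda p, q: p))  # absorption dual
print(equivalent(lambda p, q: (p and q) or ((not p) and q),
                 lambda p, q: q))                               # the law above
```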

Shortcuts for Simplifying Formulae

Definition: A formula is called a literal if it is of the form pp or ¬p\lnot p, where pp is a propositional variable.
The two pp and ¬p\lnot p are called complementary literals. The following
rules allow one to simplify conjunctions only containing literals:

  • If a conjunction contains complementary literals or the propositional constant
    0, then it always yields 0 i.e. it is a contradiction
  • All instances of the propositional constant 1 and the duplicate copies of any
    literal may be dropped. If nothing remains, then the formula is a tautology -
    you can keep just a 1 to have a formula.

To simplify disjunctions, the duals of the two rules above are used.

  • If a disjunction contains complementary literals or the propositional constant
    1, then it always yields 1 i.e. it is a tautology
  • All instances of the propositional constant 0 and all duplicate copies of any
    literal may be dropped. If nothing remains, then the formula is a
    contradiction- you can keep just a 0 to have a formula.

Example: Simplifying
(p3¬p2p3¬p1)(p1p3¬p1)(p_3\land \lnot p_2 \land p_3 \land \lnot p_1)\lor (p_1\land p_3 \land\lnot p_1)

Using the rules above, this expression can be simplified to:
p3¬p2¬p1p_3\land \lnot p_2 \land \lnot p_1
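The simplification rules for conjunctions of literals can be sketched as
follows (the string encoding of literals and the function name are my own
assumptions):

```python
# Literals are strings like 'p3' or '-p3' (its complement); the propositional
# constants are '0' and '1'. Simplify a conjunction given as a list of literals.
def simplify_conjunction(literals):
    kept = []
    for lit in literals:
        complement = lit[1:] if lit.startswith('-') else '-' + lit
        if lit == '0' or complement in kept:
            return ['0']            # complementary pair or a 0: contradiction
        if lit == '1' or lit in kept:
            continue                # drop 1s and duplicate literals
        kept.append(lit)
    return kept if kept else ['1']  # nothing left: a tautology

# The first conjunct of the example above: (p3 ∧ ¬p2 ∧ p3 ∧ ¬p1).
print(simplify_conjunction(['p3', '-p2', 'p3', '-p1']))
# The second, (p1 ∧ p3 ∧ ¬p1), contains complementary literals.
print(simplify_conjunction(['p1', 'p3', '-p1']))
```

The disjunction of the two results then simplifies to the first one, since a
disjunct of 0 may be dropped.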

Normal Forms

Just like matrices, formulae can be converted into some standard forms through
which they become more convenient for symbolic manipulation and for the
identification and comparison of formulae. In propositional calculus, there are
two such forms: the Disjunctive Normal Form and the Conjunctive Normal
Form.

Definition: A formula is said to be in disjunctive normal form (DNF) if it
is written as a disjunction, in which all the terms are conjunctions of
literals.

Example: The formulae (pq)(p¬q)(p\land q)\lor (p\land \lnot q), p(qr)p\lor (q\land r)
and ¬pt\lnot p \lor t are all in DNF.

The disjunction ¬(pq)r\lnot (p\land q) \lor r is NOT in DNF. This is because the
term ¬(pq)\lnot (p\land q) is a negation, not a conjunction.

Definition: A disjunction with literals as disjuncts is called a disjunctive
clause. Similarly, a conjunction with literals as conjuncts is called a
conjunctive clause.

Examples: (pq¬r)(p\land q\land \lnot r) is a conjunctive clause and
(pq¬r)(p\lor q\lor \lnot r) is a disjunctive clause.

In general, disjunctive and conjunctive clauses are just called clauses.

Definition: A conjunction with disjunctive clauses as its conjuncts is said
to be in conjunctive normal form.

Examples: Each of p(qr)p\land (q\lor r), pqp\land q and p¬qp\land \lnot q is in
CNF. However, the formula p(r(pq))p\land (r\lor (p\land q)) is NOT in CNF: the term
pqp\land q inside the second conjunct is a conjunction, not a literal, so
r(pq)r\lor (p\land q) is not a disjunctive clause.

Obtaining Normal Forms

How does one compute the normal form of a formula? The following tautological
equivalences can be used to this end.

These help to get rid of implications and equivalences from formulae.

  • AB ⁣¬ABA\rightarrow B \models\!\mid \lnot A\lor B
  • A    B ⁣(¬AB)(A¬B)A\iff B \models\!\mid (\lnot A \lor B)\land (A\lor \lnot B)
  • A    B ⁣(AB)(¬A¬B)A\iff B \models\!\mid (A\land B)\lor (\lnot A \land \lnot B)

These equivalences help to eliminate ¬,,\lnot, \land, \lor from the scope of
¬\lnot such that every ¬\lnot only has an atom in its scope.

  • ¬¬A ⁣A\lnot\lnot A \models\!\mid A
  • ¬(A1An) ⁣¬A1¬An\lnot (A_1\land\dots\land A_n) \models\!\mid \lnot A_1\lor\dots\lor\lnot A_n
  • ¬(A1An) ⁣¬A1¬An\lnot (A_1\lor\dots\lor A_n) \models\!\mid \lnot A_1\land\dots\land\lnot A_n

These equivalences help eliminate \lor from the scope of \land.

  • A(B1Bn) ⁣(AB1)(ABn)A\land (B_1\lor\dots\lor B_n) \models\!\mid (A\land B_1)\lor\dots\lor(A\land B_n)
  • (B1Bn)A ⁣(B1A)(BnA)(B_1\lor\dots\lor B_n)\land A \models\!\mid (B_1\land A)\lor\dots\lor(B_n\land A)

These last equivalences help eliminate \land from the scope of \lor.

  • A(B1Bn) ⁣(AB1)(ABn)A\lor (B_1\land\dots\land B_n) \models\!\mid (A\lor B_1)\land\dots\land (A\lor B_n)
  • (B1Bn)A ⁣(B1A)(BnA)(B_1\land\dots\land B_n)\lor A \models\!\mid (B_1\lor A)\land\dots\land (B_n\lor A)

This method will lead to the DNF/CNF of a formula.
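These equivalences can be spot-checked by brute force, since two formulae are tautologically equivalent iff they agree under every truth valuation. The helper below is an illustrative sketch, not part of the notes.

```python
from itertools import product

def equivalent(f, g, n):
    """True iff f and g agree under every truth valuation of n variables."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n))

# A <-> B  -||-  (~A v B) & (A v ~B)
iff        = lambda a, b: a == b
iff_as_cnf = lambda a, b: ((not a) or b) and (a or (not b))

# A v (B & C)  -||-  (A v B) & (A v C)  (distribution used for CNF)
lhs = lambda a, b, c: a or (b and c)
rhs = lambda a, b, c: (a or b) and (a or c)

print(equivalent(iff, iff_as_cnf, 2), equivalent(lhs, rhs, 3))  # True True
```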

Notes: The two normal forms that have been discussed thus far are not
mutually exclusive i.e. a formula in CNF can also be a valid formula in DNF.
Have a look at the formula ¬pq¬r\lnot p\land q \land \lnot r, which is a
conjunction with 3 literals. It is in CNF, with three disjunctive clauses.
Alternatively, it is also in DNF, with one conjunctive clause.

On the other hand, the formula ¬p(q¬r)\lnot p \lor (q\land \lnot r) is in DNF, with
two conjunctive clauses, but it cannot be considered to be a formula in CNF at
all. But if \lor is exchanged for \land in this formula, it is a valid
formula in CNF too, just like the example above.

Algorithm for CNF

  • Eliminate equivalences and implications using AB ⁣¬ABA\rightarrow B \models\!\mid \lnot A \lor B and A    B ⁣(¬AB)(A¬B)A\iff B \models\!\mid (\lnot A \lor B)\land (A\lor \lnot B)
  • Push negation inwards, to the variables using De Morgan's Laws
  • Recursively: CNF(A)CNF(A)
    • If AA is a literal, then return AA
    • If AA is BCB\land C, then return CNF(B)CNF(C)CNF(B)\land CNF(C)
    • If AA is BCB\lor C,
      • call CNF(B)CNF(B) and CNF(C)CNF(C)
      • Suppose CNF(B)=B1BnCNF(B) = B_1\land\dots\land B_n
      • Suppose CNF(C)=C1CmCNF(C) = C_1\land\dots\land C_m
      • Return i=1,,n   and   j=1,,m(BiCj)\land_{i=1,\dots,n\;\text{ and }\;j=1,\dots,m}(B_i\lor C_j)
        i.e. the conjunction of BiCjB_i\lor C_j, over all pairs i,ji,j.
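The recursive step can be sketched directly. The nested-tuple encoding of formulae below is an illustrative assumption, and the sketch presumes negations have already been pushed down to the atoms, as in the first two steps of the algorithm.

```python
def cnf(f):
    """Return CNF as a list of clauses; each clause is a list of literals.
    Formulas are tuples: ('var', name), ('not', atom), ('and', A, B), ('or', A, B)."""
    op = f[0]
    if op in ("var", "not"):               # a literal: one unit clause
        return [[f]]
    if op == "and":                        # CNF(B) & CNF(C): concatenate clauses
        return cnf(f[1]) + cnf(f[2])
    if op == "or":                         # distribute: (Bi v Cj) for all pairs
        return [bi + cj for bi in cnf(f[1]) for cj in cnf(f[2])]

# (p & q) v r  ->  (p v r) & (q v r)
p, q, r = ("var", "p"), ("var", "q"), ("var", "r")
print(cnf(("or", ("and", p, q), r)))
# [[('var', 'p'), ('var', 'r')], [('var', 'q'), ('var', 'r')]]
```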

Example: Consider the formula
((ab)(c¬ad))((¬a)(cd)(¬b¬c¬d))((a\lor b)\land (c\lor \lnot a\lor d))\lor ((\lnot a)\land (c\lor d)\land (\lnot b \lor \lnot c \lor \lnot d)). Using the recursive algorithm for CNF in
the last step above, we can expand and simplify into the CNF:
(abcd)(¬acd)(a\lor b\lor c\lor d) \land (\lnot a\lor c\lor d)

Theorem (Existence of Normal Forms): Any formula AForm(Lp)A\in Form(L^p) is
tautologically equivalent to a formula in DNF.

Proof:

  • If AA is a contradiction, then AA is tautologically equivalent to the DNF
    p¬pp\land \lnot p, where pp is any atom occurring in AA.

  • If AA is not a contradiction, then the following method can be used (note
    that this is just a demonstration of the method on an example)

    Suppose that AA has three atoms, p,q,rp,q,r and the value of AA is 1 iff 1,1,01,1,0
    or 1,0,11,0,1 or 0,0,10,0,1 are assigned to p,q,rp,q,r respectively.

    For each of these assignments, we have a conjunctive clause with three
    literals, each being either the atom or its negation, depending on the truth
    value that has been assigned to it. We have the following conjunctive clauses:
      pq¬r\;p\land q\land\lnot r,   p¬qr\;p\land\lnot q\land r,   ¬p¬qr\;\lnot p \land\lnot q\land r. We see that the first of these clauses is true iff p,q,rp,q,r have
    truth values 1,1,01,1,0, the second is true iff p,q,rp,q,r have truth values 1,0,11,0,1
    respectively and finally the third is true iff p,q,rp,q,r have truth values
    0,0,10,0,1.

    Therefore, the following formula in DNF is tautologically equivalent to AA.
    (pq¬r)(p¬qr)(¬p¬qr)(p\land q\land\lnot r)\lor (p\land \lnot q\land r)\lor (\lnot p\land\lnot q\land r).

  • If AA is a tautology, then the required DNF may simply be the formula
    p¬pp\lor\lnot p, where pp is any atom occurring in AA.

This completes the "proof".

The following theorem can also be proven using a similar strategy:

Theorem: Any formula AForm(Lp)A\in Form(L^p) is tautologically equivalent to a
formula in CNF.

DNF from Truth Tables

The example provided in the proof above shows that it is possible to construct
the DNF of a formula using just its truth table.
Consider the following truth table for the formula AA with three propositional
variables p,q,rp,q,r.

pp qq rr AA
1 1 1 1
1 1 0 0
1 0 1 1
1 0 0 0
0 1 1 0
0 1 0 0
0 0 1 1
0 0 0 0

From this table, it can be seen that the truth valuations under which AA is
true are 1,1,11,1,1, 1,0,11,0,1 and 0,0,10,0,1 for p,q,rp,q,r respectively.
Using the same approach as in the theorem above, we get that

(pqr)(p¬qr)(¬p¬qr) ⁣A(p\land q\land r)\lor (p\land \lnot q\land r)\lor (\lnot p\land \lnot q\land r) \models\!\mid A
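Reading a DNF off a truth table is easy to mechanize; the sketch below does so, with the string representation of clauses being an illustrative choice.

```python
def dnf_from_rows(variables, satisfying_rows):
    """Build a DNF string: one conjunctive clause per satisfying row."""
    clauses = []
    for row in satisfying_rows:
        # the i-th literal is the variable if its bit is 1, its negation otherwise
        lits = [v if bit else "~" + v for v, bit in zip(variables, row)]
        clauses.append("(" + " & ".join(lits) + ")")
    return " v ".join(clauses)

# The satisfying rows of the table above: 1,1,1  1,0,1  0,0,1
print(dnf_from_rows(["p", "q", "r"], [(1, 1, 1), (1, 0, 1), (0, 0, 1)]))
# (p & q & r) v (p & ~q & r) v (~p & ~q & r)
```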

CNF from Truth Tables

The process of complementation can be used to obtain conjunctive normal
forms from truth tables. If a formula AA only contains the connectives ,,¬\land, \lor, \lnot, then its complement is the formula formed by replacing all
\lor by \land, all \land by \lor and all atoms by their negations.

Using complementation, the complement of the formula A=(pq)¬rA = (p\land q)\lor\lnot r
is ¬A=(¬p¬q)r\lnot A= (\lnot p\lor\lnot q)\land r

Note that if a formula is in DNF, then its complement, ¬A\lnot A, is in CNF.
Therefore, the CNF of a formula AA can be obtained from a truth table by first
computing the DNF of ¬A\lnot A, then taking the complement of that.
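Complementation is straightforward to mechanize on a nested-tuple encoding of formulae (an illustrative choice, not course notation): swap the two binary connectives and negate every atom.

```python
def complement(f):
    """Complement of a formula over and/or/not with negations on atoms only."""
    op = f[0]
    if op == "var":
        return ("not", f)                  # atom -> its negation
    if op == "not":
        return f[1]                        # ~p -> p
    if op == "and":
        return ("or", complement(f[1]), complement(f[2]))
    if op == "or":
        return ("and", complement(f[1]), complement(f[2]))

# A = (p & q) v ~r   ->   complement = (~p v ~q) & r
p, q, r = ("var", "p"), ("var", "q"), ("var", "r")
print(complement(("or", ("and", p, q), ("not", r))))
```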

Adequate Sets of Connectives

So far, one unary connective (¬\lnot) and four binary connectives
(,,,    \land,\lor,\rightarrow,\iff) have been introduced. However, there exist many
more connectives. In fact, there are 24=162^4=16 possible binary connectives.

In general, an nn-ary connective (one that takes nn arguments) has a truth
table with 2n2^n rows (one per truth valuation of the nn arguments). A
connective is determined by the output column of its truth table, which is a
binary string of length 2n2^n, and there are 2(2n)2^{(2^n)} such strings. So
there are 2(2n)2^{(2^n)} nn-ary connectives.
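The count is easy to sanity-check for n=2n = 2 by enumerating the output columns directly:

```python
from itertools import product

# A binary connective is one output column over the 2**2 = 4 truth-table
# rows, and there are 2**(2**2) = 16 possible columns.
rows = list(product([0, 1], repeat=2))
columns = list(product([0, 1], repeat=len(rows)))
print(len(rows), len(columns))   # 4 16
```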

Definition: An adequate set of connectives is one that has the capability
to express all truth tables. It can be shown that the set {¬,,,,    }\{\lnot, \land, \lor, \rightarrow, \iff\} is an adequate set of connectives. This is also known as
the standard set of connectives.

The objective now is to devise a method to show that a set of connectives is
adequate.

Proofs of Adequacy

Definition: The arity of a function ff is the number n0n\geq 0 of
inputs that the function ff takes. Functions with 0 inputs are called
nullary. Functions with 1 input are called unary. Functions with 2
inputs are called binary. A propositional connective, ff, with arity nn,
is f:{0,1}n{0,1}f:\{0,1\}^n \mapsto \{0,1\}.

A set, SS, of propositional connectives is said to be adequate for
propositional logic
if every propositional connective (of any arity) can
be implemented using only the connectives in SS.

Theorem: {¬,,}\{\lnot, \land, \lor\} is an adequate set of connectives.

Proof: Fix any nn-ary function f:{0,1}n{0,1}f:\{0,1\}^n \mapsto \{0,1\}.
Let p1,,pnp_1,\dots, p_n be the arguments to the function ff. We shall show that
the function ff can be implemented using only the connectives in the given set.

We already know of a method which can be used to do this- from the formation of
Disjunctive Normal Forms.

  • Write out the truth table for ff
  • For every row of the truth table where ff equals 1, write out a conjunction
    with nn literals- the ii-th one being pip_i if, for that row, the ii-th
    column equals 1 and ¬pi\lnot p_i otherwise
  • Construct a formula AA that is the disjunction of all of the conjunctions
    formed from the truth table.

By construction, AA has the same truth table as ff and so we have managed to
implement ff using only the connectives in the given set. Therefore, the set
{¬,,}\{\lnot, \land, \lor\} is adequate for propositional logic. \blacksquare

Here is a connective that is itself adequate for propositional logic. The
Peirce arrow, denoted \downarrow, is defined through the following truth
table:

pp qq pqp\downarrow q
1 1 0
1 0 0
0 1 0
0 0 1

Indeed, the standard set of connectives can be implemented using the Peirce
arrow:

  • ¬p=pp\lnot p = p\downarrow p
  • pq=(pp)(qq)p\land q = (p\downarrow p)\downarrow (q\downarrow q)
  • pq=(pq)(pq)p\lor q = (p\downarrow q)\downarrow (p\downarrow q)
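The three identities can be verified by brute force over all four truth valuations; the sketch below models the Peirce arrow as Python's NOR.

```python
from itertools import product

nor = lambda p, q: not (p or q)          # the Peirce arrow

for p, q in product([False, True], repeat=2):
    assert (not p) == nor(p, p)                          # negation
    assert (p and q) == nor(nor(p, p), nor(q, q))        # conjunction
    assert (p or q) == nor(nor(p, q), nor(p, q))         # disjunction
print("all three identities verified")
```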

So, to test whether a set of connectives is adequate for propositional logic, it
suffices to show that the Peirce arrow can be implemented using that set.

Another one of these beastly connectives is the Sheffer stroke, which is denoted
as |. It is defined as follows:

pp qq pp|qq
1 1 0
1 0 1
0 1 1
0 0 1

The Sheffer stroke is also known as NAND (negation of \land).
The set of connectives {}\{|\} is also adequate. This can be shown by
implementing the Peirce arrow using the Sheffer stroke:

  • pq=((pp)(qq))((pp)(qq))p\downarrow q = ((p|p)|(q|q))|((p|p)|(q|q))
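This identity can likewise be checked over all four valuations, modeling the Sheffer stroke as Python's NAND:

```python
from itertools import product

nand = lambda p, q: not (p and q)        # the Sheffer stroke
nor  = lambda p, q: not (p or q)         # the Peirce arrow

for p, q in product([False, True], repeat=2):
    lhs = nand(nand(nand(p, p), nand(q, q)), nand(nand(p, p), nand(q, q)))
    assert lhs == nor(p, q)
print("the Sheffer stroke implements the Peirce arrow")
```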

Proofs of Inadequacy

Similar to how a connective can be proven adequate, a connective can be proven
inadequate by showing that one or more of the standard connectives cannot be
implemented using the connective.

Example: Show that the set of connectives S={}S=\{\land\} is inadequate.

We shall show that ¬\lnot cannot be implemented using the set SS. Note that any
formula over a single variable pp containing only the connectives in SS is such
that if a truth valuation sets pt=0p^t = 0, then the formula itself must evaluate
to 0.

So, for it to be possible to define the standard ¬p\lnot p using SS, there
needs to exist a formula AA with the only propositional variable being pp and
the only connective being \land such that ¬p ⁣A\lnot p \models\!\mid A. But as
mentioned before, if pt=0p^t = 0, then At=0A^t = 0 too, so ¬p\lnot p and AA cannot
be tautologically equivalent.
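The argument can be illustrated by brute force: enumerate all \land-only formulae over pp up to a small nesting depth (an arbitrary cutoff, for illustration only) and confirm that every one of them evaluates to 0 when pp does.

```python
def formulas_over_p(depth):
    """All and-only formulas over p, as functions, up to a nesting depth."""
    fns = [lambda p: p]
    for _ in range(depth):
        # conjoin every pair of formulas built so far
        fns = fns + [lambda p, f=f, g=g: f(p) and g(p)
                     for f in fns for g in fns]
    return fns

# Every such formula is 0 when p is 0, so none of them can be ~p.
assert all(f(False) is False for f in formulas_over_p(2))
print("no and-only formula over p computes ~p")
```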

Formal Deducibility

So far, the methods studied for proving arguments use truth tables and
semantics (tautological consequence, \models). We now desire to replace
these methods with a purely syntactic one, with rules of deduction that are
entirely syntactic.

We now define a relation called formal deducibility, denoted by \vdash,
which is intuitively the same as tautological consequence, but has a different
method of proving the validity of an argument which can be done
mechanically/syntactically.

The word "formal" in this context means that the method is only concerned with
the syntactic form of an argument. Proofs do not refer to any semantic
properties and can be verified mechanically.

The system of proof being used in this document is the system of Natural
Deduction, which is far from the only system of formal proof.

Conventions: Here are some conventions to be followed to make notation
easier.

  • Formal deducibility is a relationship between a set of formulae (called the
    premises) and a formula AA (called the conclusion).
  • The symbol \vdash denotes the relationship of formal deducibility between a
    set and a formula. So ΣA\Sigma \vdash A asserts that AA is formally deducible
    from the set Σ\Sigma.
  • If {A1,,An}\{A_1,\dots,A_n\} is a set of formulae, for notational convenience, the
    curly braces may be elided. So saying A1,,AnA_1,\dots,A_n is equivalent.
  • If Σ{A}\Sigma\cup \{A\} is a set, where AA is a formula, then set-notational
    syntax may be elided. So saying Σ,A\Sigma, A is equivalent.
  • If Σ\Sigma and Σ\Sigma^* are sets of formulae, then set notation syntax can
    be elided from the expression ΣΣ\Sigma\cup \Sigma^*. So Σ,Σ\Sigma,\Sigma^* is
    equivalent notation.

Natural Deduction

A proof is a finite sequence of statements of the form ΣiAi\Sigma_i \vdash A_i. Each
such statement is called a sequent.

A proof is considered to be valid if every sequent in the sequence of sequents
in the proof is created from previous sequents according to a specified proof
rule. These rules shall be introduced soon.

Also, a theorem is a sequent that occurs in a valid proof (usually the last).

Now, here are the rules of natural deduction. Each of A,B,CA,B,C is any formula and
Σ\Sigma is any set of formulae.

  1. (Ref)\text{(Ref)} is the rule of Reflexivity. It says that AAA\vdash A is a
    theorem.

  2. (+)(+) is the rule of Addition of Premises. It states that if ΣA\Sigma \vdash A is a theorem, then so is Σ,ΣA\Sigma, \Sigma^* \vdash A.

  3. (¬)(\lnot -) is the rule of ¬\lnot elimination. It states that if
    Σ,¬AB\Sigma, \lnot A \vdash B and Σ,¬A¬B\Sigma, \lnot A \vdash \lnot B are theorems,
    then ΣA\Sigma \vdash A is a theorem.

  4. ()(\rightarrow -) is the rule of \rightarrow elimination. It states
    that if ΣAB\Sigma \vdash A \rightarrow B and ΣA\Sigma \vdash A are theorems,
    then ΣB\Sigma \vdash B is a theorem.

  5. (+)(\rightarrow +) is the rule of \rightarrow introduction. It states
    that if Σ,AB\Sigma, A\vdash B is a theorem, then ΣAB\Sigma\vdash A\rightarrow B
    is a theorem.

  6. ()(\land -) is the rule of \land elimination. It states that if ΣAB\Sigma \vdash A\land B is a theorem, then ΣA\Sigma \vdash A and ΣB\Sigma \vdash B
    are theorems.

  7. (+)(\land +) is the rule of \land introduction. It states that if
    ΣA\Sigma \vdash A and ΣB\Sigma \vdash B are theorems, then ΣAB\Sigma\vdash A\land B is a theorem.

  8. ()(\lor -) is the rule of \lor elimination. It states that if Σ,AC\Sigma, A \vdash C and Σ,BC\Sigma, B \vdash C are theorems, then Σ,ABC\Sigma, A\lor B \vdash C is a theorem.

  9. (+)(\lor +) is the rule of \lor introduction. It states that if
    ΣA\Sigma\vdash A is a theorem, then ΣAB\Sigma\vdash A\lor B and ΣBA\Sigma\vdash B\lor A are theorems.

  10. a) (    )(\iff -) is the rule of     \iff elimination. It states that if ΣA    B\Sigma \vdash A\iff B and ΣA\Sigma \vdash A are theorems, then ΣB\Sigma \vdash B
    is a theorem.

    b) (    )(\iff -) is the rule of     \iff elimination. It states that if ΣA    B\Sigma \vdash A\iff B and ΣB\Sigma \vdash B are theorems, then ΣA\Sigma \vdash A
    is a theorem.

  11. (    +)(\iff +) is the rule of     \iff introduction. It states that if
    Σ,AB\Sigma, A \vdash B and Σ,BA\Sigma, B \vdash A are theorems, then ΣA    B\Sigma \vdash A\iff B is a theorem.

Example: This is called the membership rule. It states that if AΣA\in \Sigma, then ΣA\Sigma\vdash A is a theorem.

The membership rule, when used, will be cited as ()(\in). Its proof below
shows how to prove a theorem using the method of formal deduction.

Proof: Assume that AΣA\in \Sigma and let Σ=Σ{A}\Sigma^* = \Sigma \setminus \{A\}. So Σ,A=Σ{A}=Σ\Sigma^*, A = \Sigma^* \cup \{A\} = \Sigma.

1.AA(Ref)2.A,ΣAby (+) on 1\begin{aligned} &1.\quad A \vdash A &(\text{Ref})\\ &2.\quad A,\Sigma^* \vdash A &\text{by (+) on 1}\\ \blacksquare \end{aligned}

In the above proof, line 1 is generated using the (Ref)\text{(Ref)} rule and this
is cited to its right. Then, line 2 is generated using line 1, by the (+)(+)
rule which is the addition of premises and this is cited to its right again.
These citations are important because they provide justification for each step.
Note that citations can only be made to previous lines in the proof.
These steps provide a formal proof for the statement ΣA\Sigma \vdash A and so
it is now a theorem.

Example: Prove that AB,BCACA\rightarrow B, B\rightarrow C \vdash A \rightarrow C.
Here is a formal proof for this statement.

1.AB,BC,AAB()2.AB,BC,AA()3.AB,BC,AB() on 1,24.AB,BC,ABC()5.AB,BC,AC() on 4,36.AB,BCAC(+) on 5\begin{aligned} &1.\quad A\rightarrow B, B\rightarrow C, A \vdash A\rightarrow B &(\in)\\ &2.\quad A\rightarrow B, B\rightarrow C, A \vdash A &(\in)\\ &3.\quad A\rightarrow B, B\rightarrow C, A \vdash B &(\rightarrow -)\text{ on 1,2}\\ &4.\quad A\rightarrow B, B\rightarrow C, A \vdash B\rightarrow C &(\in)\\ &5.\quad A\rightarrow B, B\rightarrow C, A \vdash C &(\rightarrow -)\text{ on 4,3}\\ &6.\quad A\rightarrow B, B\rightarrow C \vdash A \rightarrow C &(\rightarrow +) \text{ on 5} \end{aligned}

In this proof, the membership rule, ()(\in), was used as a lemma. This is
possible because the rules of natural deduction and theorems are just schemas
for a proof. So instead of executing each of the steps in the proof of the
lemma, it suffices to cite the lemma, which is a theorem.

Note that in the above proof, it was also possible to reorder the lines, as long
as they all cite previous lines.

Some Intuition

  • (¬)(\lnot -) is the proof schema for an indirect proof/a proof by
    contradiction. If a contradiction (in the rule, both BB and ¬B\lnot B)
    can be deduced from the set Σ\Sigma together with the additional premise
    ¬A\lnot A, then that premise must be false, and so ΣA\Sigma \vdash A.
  • ()(\lor -) is the proof schema for a proof by cases. If CC follows from AA
    and CC also follows from BB, then CC follows from ABA\lor B.
  • (+)(\rightarrow +) is the proof schema used to prove an implication. In order
    to prove ABA\rightarrow B with a certain set of premises i.e. to prove
    ΣAB\Sigma \vdash A\rightarrow B, one should assume AA (by adding it to the
    premises) and try to prove BB i.e. Σ,AB\Sigma, A \vdash B.

Definition: Here is a complete formal definition of formal deducibility.
A formal deduction system such as Natural Deduction is specified by a set of
deduction rules.

A formula AA is formally deducible from Σ\Sigma (denoted as ΣA\Sigma \vdash A) iff ΣA\Sigma \vdash A is generated by a finite number of applications of
the rules of the system. So ΣA\Sigma \vdash A holds iff there exists a finite
sequence:

(1)Σ1A1(n)ΣnAn\begin{aligned} (1) \Sigma_1 &\vdash A_1\\ &\vdots\\ (n) \Sigma_n &\vdash A_n \end{aligned}

such that the following are satisfied:

  • Each sequent ΣkAk\Sigma_k \vdash A_k (for 1kn1\leq k \leq n) is generated by one
    rule of formal deduction from the sequents that precede it
  • The final sequent ΣnAn\Sigma_n \vdash A_n is the desired ΣA\Sigma\vdash A i.e.
    Σn=Σ\Sigma_n = \Sigma and An=AA_n = A.

The above sequence of sequents is called a formal proof of its last sequent
i.e. ΣnAn\Sigma_n \vdash A_n.

Both formal deducibility and Form(Lp)Form(L^p) are constructed inductively and one can
observe that formulae correspond to schemes of proof and the rules of formation
correspond to the rules of formal deduction.

ΣA\Sigma \models A VS ΣA\Sigma \vdash A: There is a difference between
tautological consequence and formal deducibility.

  • Tautological consequence is concerned with semantics while formal
    deducibility is concerned with syntax (Remember: syntax is to form as
    semantics is to meaning).
  • The connection between tautological consequence and implications is that
    "ABA\models B iff ABA\rightarrow B is a tautology".
  • The connection between formal deducibility and implications is that "ABA\vdash B iff AB\emptyset\vdash A\rightarrow B".