Part I: Propositional Logic

9. “… if and only if …”, Using Theorems

9.1  A historical example

The philosopher David Hume (1711-1776) is remembered for being a brilliant skeptical empiricist.  A person is a skeptic about a topic if that person both has very strict standards for what constitutes knowledge about that topic and also believes we cannot meet those strict standards.  Empiricism is the view that we primarily gain knowledge through experience, particularly experiences of our senses.  In his book, An Enquiry Concerning Human Understanding, Hume lays out his principles for knowledge, and then advises us to clean up our libraries:

When we run over libraries, persuaded of these principles, what havoc must we make? If we take in our hand any volume of divinity or school metaphysics, for instance, let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion.[11]

Hume felt that the only sources of knowledge were logical or mathematical reasoning (which he calls above “abstract reasoning concerning quantity or number”) or sense experience (“experimental reasoning concerning matter of fact and existence”).  Hume is thus led to argue that any claim not based upon one or the other method is worthless.

We can reconstruct Hume’s argument in the following way.  Suppose t is some topic about which we claim to have knowledge, and suppose that we did not get this knowledge from experience or logic.  Written in English, the argument runs:

We have knowledge about t if and only if our claims about t are learned from experimental reasoning or from logic or mathematics.

Our claims about t are not learned from experimental reasoning.

Our claims about t are not learned from logic or mathematics.

_____

We do not have knowledge about t.

What does that phrase “if and only if” mean?  Philosophers think that it, and several synonymous phrases, are used often in reasoning.  Leaving “if and only if” unexplained for now, we can use the following translation key to write up the argument in a mix of our propositional logic and English.

P:  We have knowledge about t.

Q:  Our claims about t are learned from experimental reasoning.

R:  Our claims about t are learned from logic or mathematics.

And so we have:

P if and only if (QvR)

¬Q

¬R

_____

¬P

Our task is to add to our logical language an equivalent to “if and only if”.  Then we can evaluate this reformulation of Hume’s argument.

9.2  The biconditional

Before we introduce a symbol synonymous with “if and only if”, and then lay out its syntax and semantics, we should start with an observation.  A phrase like “P if and only if Q” appears to be an abbreviated way of saying “P if Q and P only if Q”.  Once we notice this, we do not have to try to discern the meaning of “if and only if” using our expert understanding of English.  Instead, we can discern the meaning of “if and only if” using our already rigorous definitions of “if”, “and”, and “only if”.  Specifically, “P if Q and P only if Q” will be translated “((Q→P)^(P→Q))”.  (If this is unclear to you, go back and review section 2.2.)  Now, let us make a truth table for this formula.

P   Q   (Q→P)   (P→Q)   ((Q→P)^(P→Q))
T   T   T       T       T
T   F   T       F       F
F   T   F       T       F
F   F   T       T       T
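
For readers who like to check such tables mechanically, here is a minimal Python sketch of the same computation (Python and the helper name implies are illustrative additions, not part of our logical language): it enumerates every assignment of truth values to P and Q and prints the columns above.

    from itertools import product

    # "implies" is material implication: (A -> B) is false only when A is true
    # and B is false.
    def implies(a, b):
        return (not a) or b

    print("P      Q      (Q->P)  (P->Q)  ((Q->P)^(P->Q))")
    for p, q in product([True, False], repeat=2):
        left, right = implies(q, p), implies(p, q)
        print(p, q, left, right, left and right)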

We have settled the semantics for “if and only if”.  We can now introduce a new symbol for this expression; it is traditional to use the double arrow, “↔”.  Here are the syntax and semantics of “↔”.

If Φ and Ψ are sentences, then

(Φ↔Ψ)

is a sentence.  This kind of sentence is typically called a “biconditional”.

The semantics is given by the following truth table.

Φ   Ψ   (Φ↔Ψ)
T   T   T
T   F   F
F   T   F
F   F   T

One pleasing result of our account of the biconditional is that it allows us to succinctly explain the syntactic notion of logical equivalence.  We say that two sentences Φ and Ψ are “equivalent” or “logically equivalent” if (Φ↔Ψ) is a theorem.
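
As an informal, semantic companion to this definition, we can check that two formulas agree on every assignment of truth values, which is just to check that the biconditional formed from them is true on every row of its truth table.  The following Python sketch is only an illustration; the helper names iff, implies, and are_equivalent are assumptions of the example, not part of the logic.

    from itertools import product

    # Semantic idea: phi and psi are equivalent when (phi <-> psi) is true
    # under every assignment of truth values to their sentence letters.
    def iff(a, b):
        return a == b

    def are_equivalent(phi, psi, letters):
        return all(iff(phi(*vals), psi(*vals))
                   for vals in product([True, False], repeat=letters))

    implies = lambda a, b: (not a) or b

    # Example: "P if and only if Q" versus "((Q -> P) ^ (P -> Q))".
    print(are_equivalent(lambda p, q: iff(p, q),
                         lambda p, q: implies(q, p) and implies(p, q),
                         2))   # True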

9.3 Alternative phrases

In English, it appears that there are several phrases that usually have the same meaning as the biconditional.  Each of the following sentences would be translated as (P↔Q).

P if and only if Q.

P just in case Q.

P is necessary and sufficient for Q.

P is equivalent to Q.

9.4  Reasoning with the biconditional

How can we reason using a biconditional?  At first, it would seem to offer little guidance.  If I know that (P↔Q), I know that P and Q have the same truth value, but from that sentence alone I do not know if they are both true or both false.  Nonetheless, we can take advantage of the semantics for the biconditional to observe that if we also know the truth value of one of the sentences constituting the biconditional, then we can derive the truth value of the other sentence.  This suggests a straightforward set of rules.  These will actually be four rules, but we will group them together under a single name, “equivalence”:

(Φ↔Ψ)

Φ

_____

Ψ

and

(Φ↔Ψ)

Ψ

_____

Φ

and

(Φ↔Ψ)

¬Φ

_____

¬Ψ

and

(Φ↔Ψ)

¬Ψ

_____

¬Φ

What if we instead are trying to show a biconditional?  Here we can return to the insight that the biconditional (Φ↔Ψ) is equivalent to ((Φ→Ψ)^(Ψ→Φ)).  If we can prove both (Φ→Ψ) and (Ψ→Φ), then we know that (Φ↔Ψ) must be true.

We can call this rule “bicondition”.  It has the following form:

(Φ→Ψ)

(Ψ→Φ)

_____

(Φ↔Ψ)

This means that often when we aim to prove a biconditional, we will undertake two conditional derivations to derive two conditionals, and then use the bicondition rule.  That is, many proofs of biconditionals have the following form:

    \[ \fitchctx{ \subproof{\pline{\phi}}{ \ellipsesline\\ \pline{\psi} } \fpline{(\phi \lif \psi)}\\ \subproof{\pline{\psi}}{ \ellipsesline\\ \pline{\phi} } \fpline{(\psi \lif \phi)}\\ \pline{(\phi \liff \psi)} } \]

9.5  Returning to Hume

We can now see whether we can prove the conclusion of Hume’s argument.  With the new biconditional symbol in hand, we can begin a direct proof with our three premises.

    \[ \fitchprf{\pline[1.] {(P \liff (Q \lor R))} [premise]\\ \pline[2.]{\lnot Q} [premise]\\ \pline[3.]{\lnot R} [premise]\\ } { } \]

We have already observed that (QvR) must be false, given that ¬Q and ¬R are premises.  So let’s prove ¬(QvR).  This sentence cannot be proved directly, given the premises we have; and it cannot be proved with a conditional derivation, since it is not a conditional.  So let’s try an indirect proof.  We believe that ¬(QvR) is true, so we’ll assume its denial and show a contradiction.

    \[ \fitchprf{\pline[1.] {(P \liff (Q \lor R))} [premise]\\ \pline[2.]{\lnot Q} [premise]\\ \pline[3.]{\lnot R} [premise]\\ } { \subproof{\pline[4.]{\lnot \lnot (Q \lor R)}[assumption for indirect derivation]}{ \pline[5.]{(Q \lor R)}[double negation, 4]\\ \pline[6.]{R}[modus tollendo ponens, 5, 2]\\ \pline[7.]{\lnot R}[repetition, 3] } \pline[8.]{\lnot (Q \lor R)}[indirect proof, 4-7]\\ \pline[9.]{\lnot P}[equivalence, 1, 8] } \]

Hume’s argument, at least as we reconstructed it, is valid.
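
The derivation settles validity syntactically.  As a supplementary, semantic check (not a replacement for the proof), we can confirm by brute force that no assignment of truth values makes all three premises true while the conclusion is false.  The Python sketch below is illustrative only, and the names is_valid, implies, and iff are assumptions of the example.

    from itertools import product

    implies = lambda a, b: (not a) or b
    iff = lambda a, b: a == b

    # An argument is valid when no assignment makes every premise true and the
    # conclusion false.
    def is_valid(premises, conclusion, letters):
        for vals in product([True, False], repeat=letters):
            if all(prem(*vals) for prem in premises) and not conclusion(*vals):
                return False
        return True

    premises = [
        lambda p, q, r: iff(p, q or r),   # (P <-> (Q v R))
        lambda p, q, r: not q,            # not Q
        lambda p, q, r: not r,            # not R
    ]
    conclusion = lambda p, q, r: not p    # not P
    print(is_valid(premises, conclusion, 3))   # True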

Is Hume’s argument sound?  Whether it is sound depends upon the first premise above (since the second and third premises are abstractions about some topic t).  More specifically, it depends upon the claim that we have knowledge about something just in case we can show it with experiment or logic.  Hume argues we should distrust—indeed, we should burn texts containing—claims that are not from experiment and observation, or from logic and math.  But consider this claim:  we have knowledge about a topic t if and only if our claims about t are learned from experiment or our claims about t are learned from logic or mathematics.

Did Hume discover this claim through experiments?  Or did he discover it through logic?  What fate would Hume’s book suffer, if we took his advice?

9.6  Some examples

It can be helpful to prove some theorems that make use of the biconditional, in order to illustrate how we can reason with it.

Here is a useful principle.  If two sentences have the same truth value as a third sentence, then they have the same truth value as each other.  We state this as (((P↔Q)^(R↔Q))→(P↔R)).  To illustrate reasoning with the biconditional, let us prove this theorem.

This theorem is a conditional, so it will require a conditional derivation.  The consequent of the conditional is a biconditional, so we will expect to need two conditional derivations, one to prove (P→R) and one to prove (R→P).  The proof will look like this.  Study it closely.

    \[ \fitchprf{} { \subproof{\pline[1.]{((P \liff Q) \land (R \liff Q))}[assumption for conditional derivation]}{ \pline[2.]{(P \liff Q)}[simplification, 1]\\ \pline[3.]{(R \liff Q)}[simplification, 1]\\ \subproof{\pline[4.]{P}[assumption for conditional derivation]}{ \pline[5.]{Q}[equivalence, 2, 4]\\ \pline[6.]{R}[equivalence, 3, 5]\\ } \pline[7.]{(P \lif R)}[conditional derivation, 4-6]\\ \subproof{\pline[8.]{R}[assumption for conditional derivation]}{ \pline[9.]{Q}[equivalence, 3, 8]\\ \pline[10.]{P}[equivalence, 2, 9]\\ } \pline[11.]{(R \lif P)}[conditional derivation, 8-10]\\ \pline[12.]{(P \liff R)}[bicondition, 7, 11]\\ } \pline[13.]{(((P \liff Q) \land (R \liff Q)) \lif (P \liff R))}[conditional derivation, 1-12]\\ } \]

We have mentioned before the principles that we associate with the mathematician Augustus De Morgan (1806-1871), and which today are called “De Morgan’s Laws” or the “De Morgan Equivalences”.  These are the recognition that ¬(PvQ) and (¬P^¬Q) are equivalent, and also that ¬(P^Q) and (¬Pv¬Q) are equivalent.  We can now express these with the biconditional.  The following are theorems of our logic:

(¬(PvQ)↔(¬P^¬Q))

(¬(P^Q)↔(¬Pv¬Q))

We will prove the second of these theorems.  This is perhaps the most difficult proof we have seen; it requires nested indirect proofs, and a fair amount of cleverness in finding what the relevant contradiction will be.

    \[ \fitchprf{} { \subproof{\pline[1.]{\lnot (P \land Q)}[assumption for conditional derivation]}{ \subproof{\pline[2.]{\lnot(\lnot P \lor \lnot Q)}[assumption for indirect derivation]}{ \subproof{\pline[3.]{\lnot P}[assumption for indirect derivation]}{ \pline[4.]{(\lnot P \lor \lnot Q)}[addition, 3]\\ \pline[5.]{\lnot(\lnot P \lor \lnot Q)}[repeat, 2]\\ } \pline[6.]{P}[indirect derivation, 3-5]\\ \subproof{\pline[7.]{\lnot Q}[assumption for indirect derivation]}{ \pline[8.]{(\lnot P \lor \lnot Q)}[addition, 7]\\ \pline[9.]{\lnot(\lnot P \lor \lnot Q)}[repeat, 2]\\ } \pline[10.]{Q}[indirect derivation, 7-9]\\ \pline[11.]{(P \land Q)}[adjunction, 6, 10]\\ \pline[12.]{\lnot (P \land Q)}[repeat, 1]\\ } \pline[13.]{(\lnot P \lor \lnot Q)}[indirect derivation, 2-12]\\ } \pline[14.]{(\lnot (P \land Q) \lif (\lnot P \lor \lnot Q))}[conditional derivation, 1-13]\\ \subproof{\pline[15.]{(\lnot P \lor \lnot Q)}[assumption for conditional derivation]}{ \subproof{\pline[16.]{\lnot \lnot (P \land Q)}[assumption for indirect derivation]}{ \pline[17.]{(P \land Q)}[double negation, 16]\\ \pline[18.]{P}[simplification, 17]\\ \pline[19.]{\lnot \lnot P}[double negation, 18]\\ \pline[20.]{\lnot Q}[modus tollendo ponens, 15, 19]\\ \pline[21.]{Q}[simplification, 17]\\ } \pline[22.]{\lnot (P \land Q)}[indirect derivation, 16-21]\\ } \pline[23.]{((\lnot P \lor \lnot Q) \lif \lnot (P \land Q))}[conditional derivation, 15-22]\\ \pline[24.]{(\lnot (P \land Q) \liff (\lnot P \lor \lnot Q))}[bicondition, 14, 23]\\ } \]

9.7 Using theorems

Every sentence of our logic is, in semantic terms, one of three kinds.  It is either a tautology, a contradictory sentence, or a contingent sentence.  We have already defined “tautology” (a sentence that must be true) and “contradictory sentence” (a sentence that must be false).  A contingent sentence is a sentence that is neither a tautology nor a contradictory sentence.  Thus, a contingent sentence is a sentence that might be true, or might be false.

Here is an example of each kind of sentence:

(Pv¬P)

(P↔¬P)

P

The first is a tautology, the second is a contradictory sentence, and the third is contingent.  We can see this with a truth table.

P   ¬P   (Pv¬P)   (P↔¬P)   P
T   F    T        F        T
F   T    T        F        F

Notice that the negation of a tautology is a contradiction, the negation of a contradiction is a tautology, and the negation of a contingent sentence is a contingent sentence.

¬(Pv¬P)

¬(P↔¬P)

¬P

P   ¬P   (Pv¬P)   ¬(Pv¬P)   (P↔¬P)   ¬(P↔¬P)
T   F    T        F         F        T
F   T    T        F         F        T
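
This three-way classification can also be checked mechanically: enumerate every assignment of truth values and see whether the formula is always true, never true, or sometimes each.  The Python sketch below is illustrative; the function classify and the encodings of the example sentences are assumptions of the example.

    from itertools import product

    # Sort a formula (given as a function of its sentence letters) into the
    # three kinds: true on every row, false on every row, or neither.
    def classify(formula, letters):
        rows = [formula(*vals) for vals in product([True, False], repeat=letters)]
        if all(rows):
            return "tautology"
        if not any(rows):
            return "contradictory"
        return "contingent"

    iff = lambda a, b: a == b
    print(classify(lambda p: p or not p, 1))          # tautology
    print(classify(lambda p: iff(p, not p), 1))       # contradictory
    print(classify(lambda p: p, 1))                   # contingent
    # Negations flip the first two kinds and leave the third alone:
    print(classify(lambda p: not (p or not p), 1))    # contradictory
    print(classify(lambda p: not iff(p, not p), 1))   # tautology
    print(classify(lambda p: not p, 1))               # contingent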

A moment’s reflection will reveal that it would be quite a disaster if either a contradictory sentence or a contingent sentence were a theorem of our propositional logic.  Our logic was designed to produce only valid arguments.  Arguments that have no premises, we observed, should have conclusions that must be true (again, this follows because a sentence that can be proved with no premises could be proved with any premises, and so it had better be true no matter what premises we use).  If a theorem were contradictory, we would know that we could prove a falsehood.  If a theorem were contingent, then sometimes we could prove a falsehood (that is, we could prove a sentence that is under some conditions false).  And, given that we have adopted indirect derivation as a proof method, it follows that once we have a contradiction or a contradictory sentence in an argument, we can prove anything.

Theorems can be very useful to us in arguments.  Suppose we know that neither Smith nor Jones will go to London, and we want to prove, therefore, that Jones will not go to London.  If we allowed ourselves to use one of De Morgan’s theorems, we could make quick work of the argument.  Assume the following key.

P:  Smith will go to London.

Q:  Jones will go to London.

And we have the following argument:

    \[ \fitchprf{\pline[1.] {\lnot (P \lor Q)} [premise]\\ }{ \pline[2.]{(\lnot (P \lor Q) \liff ( \lnot P \land \lnot Q))}[theorem]\\ \pline[3.]{( \lnot P \land \lnot Q)} [equivalence, 2, 1]\\ \pline[4.]{ \lnot Q}[simplification, 3]\\ } \]

This proof was made very easy by our use of the theorem at line 2.

There are two things to note about this.  First, we should allow ourselves to do this, because if we know that a sentence is a theorem, then we know that we could prove that theorem in a subproof.  That is, we could replace line 2 above with a long subproof that proves (¬(P v Q)↔(¬P ^ ¬Q)), which we could then use.  But if we are certain that (¬(P v Q)↔(¬P ^ ¬Q)) is a theorem, we should not need to do this proof again and again, each time that we want to make use of the theorem.

The second issue that we should recognize is more subtle.  There are infinitely many sentences of the form of our theorem, and we should be able to use those also.  For example, the following sentences would each have a proof identical to our proof of the theorem (¬(P v Q)↔(¬P ^ ¬Q)), except that the letters would be different:

(¬(R v S) ↔ (¬R ^ ¬S))

(¬(T v U) ↔ (¬T ^ ¬U))

(¬(V v W) ↔ (¬V ^ ¬W))

This is hopefully obvious.  Take the proof of (¬(P v Q)↔(¬P ^ ¬Q)), and in that proof replace each instance of P with R and each instance of Q with S, and you would have a proof of (¬(R v S)↔(¬R ^ ¬S)).

But here is something that perhaps is less obvious.  Each of the following can be thought of as similar to the theorem (¬(P v Q)↔(¬P ^ ¬Q)).

(¬((P^Q) v (R^S))↔(¬(P^Q) ^ ¬(R^S)))

(¬(T v (Q v V))↔(¬T ^ ¬(Q v V)))

(¬((Q↔P) v (¬R→¬Q))↔(¬(Q↔P) ^ ¬(¬R→¬Q)))

For example, if one took a proof of (¬(P v Q)↔(¬P ^ ¬Q)) and replaced each initial instance of P with (Q↔P) and each initial instance of Q with (¬R→¬Q), then one would have a proof of the theorem (¬((Q↔P) v (¬R→¬Q))↔(¬(Q↔P) ^ ¬(¬R→¬Q))).

We could capture this insight in two ways.  We could state theorems of our metalanguage and allow that these have instances.  Thus, we could take (¬(Φ v Ψ) ↔ (¬Φ ^ ¬Ψ)) as a metalanguage theorem, in which we could replace each Φ with a sentence and each Ψ with a sentence and get a particular instance of a theorem.  An alternative is to allow that from a theorem we can produce other theorems through substitution.  For ease, we will take this second strategy.

Our rule will be this.  Once we prove a theorem, we can cite it in a proof at any time.  Our justification is that the claim is a theorem.  We allow substitution of any atomic sentence in the theorem with any other sentence if and only if we replace each initial instance of that atomic sentence in the theorem with the same sentence.
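
To see concretely what the rule demands, here is a minimal Python sketch, offered purely as an illustration (the rule itself concerns our formal language, not programs).  Because the replacement below is simultaneous, only letters occurring in the original theorem are replaced, and each is replaced by the same sentence throughout, which is the “each initial instance” requirement.

    def substitute(theorem, mapping):
        # Build a character-level translation table: each atomic sentence letter
        # is mapped to the sentence that will replace it.
        table = {ord(letter): replacement for letter, replacement in mapping.items()}
        # str.translate applies every replacement simultaneously, so letters that
        # appear only inside the replacement sentences are left alone.
        return theorem.translate(table)

    t3 = "(¬(P v Q)↔(¬P ^ ¬Q))"
    print(substitute(t3, {"P": "(Q↔P)", "Q": "(¬R→¬Q)"}))
    # prints: (¬((Q↔P) v (¬R→¬Q))↔(¬(Q↔P) ^ ¬(¬R→¬Q)))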

Before we consider an example, it is beneficial to list some useful theorems.  There are infinitely many theorems of our language, but these ten are often very helpful.  A few we have proved.  The others can be proved as an exercise.

T1  (P v ¬P)

T2  (¬(P→Q) ↔ (P^¬Q))

T3  (¬(P v Q) ↔ (¬P ^ ¬Q))

T4  ((¬P v ¬Q) ↔ ¬(P ^ Q))

T5  (¬(P ↔ Q) ↔ (P ↔ ¬Q))

T6  (¬P → (P → Q))

T7  (P → (Q → P))

T8  ((P→(Q→R)) → ((P→Q) → (P→R)))

T9  ((¬P→¬Q) → ((¬P→Q) →P))

T10  ((P→Q) → (¬Q→¬P))
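
As a sanity check on this list (and a useful companion to the exercise of proving them), each of T1 through T10 can be confirmed to be a tautology by brute force.  The Python sketch below encodes each theorem as a function of its sentence letters; this only checks that each is true on every row of its truth table, which is weaker than giving a derivation, and the encoding is an assumption of the example.

    from itertools import product

    imp = lambda a, b: (not a) or b
    iff = lambda a, b: a == b

    # Each entry records how many sentence letters the theorem uses and a
    # function computing its truth value on a given assignment.
    theorems = {
        "T1":  (1, lambda p: p or not p),
        "T2":  (2, lambda p, q: iff(not imp(p, q), p and not q)),
        "T3":  (2, lambda p, q: iff(not (p or q), (not p) and (not q))),
        "T4":  (2, lambda p, q: iff((not p) or (not q), not (p and q))),
        "T5":  (2, lambda p, q: iff(not iff(p, q), iff(p, not q))),
        "T6":  (2, lambda p, q: imp(not p, imp(p, q))),
        "T7":  (2, lambda p, q: imp(p, imp(q, p))),
        "T8":  (3, lambda p, q, r: imp(imp(p, imp(q, r)),
                                       imp(imp(p, q), imp(p, r)))),
        "T9":  (2, lambda p, q: imp(imp(not p, not q), imp(imp(not p, q), p))),
        "T10": (2, lambda p, q: imp(imp(p, q), imp(not q, not p))),
    }

    for name, (letters, formula) in theorems.items():
        assert all(formula(*vals)
                   for vals in product([True, False], repeat=letters)), name
    print("T1-T10 are all tautologies")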

Some examples will make the advantage of using theorems clear.  Consider a different argument, building on the one above.  We know that neither is it the case that if Smith goes to London, he will go to Berlin, nor is it the case that if Jones goes to London he will go to Berlin.  We want to prove that it is not the case that Jones will go to Berlin.  We add the following to our key:

R:  Smith will go to Berlin.

S:  Jones will go to Berlin.

And we have the following argument:

    \[ \fitchprf{\pline[1.] {\lnot ((P \lif R) \lor (Q \lif S))} [premise]\\ }{ \pline[2.]{\brokenform{(\lnot ((P \lif R) \lor (Q \lif S)) \liff}{ \formula{( \lnot (P \lif R) \land \lnot (Q \lif S)))}}}[theorem T3]\\ \pline[3.]{( \lnot (P \lif R) \land \lnot (Q \lif S))} [equivalence, 2, 1]\\ \pline[4.]{ \lnot (Q \lif S)}[simplification, 3]\\ \pline[5.]{( \lnot (Q \lif S) \liff (Q \land \lnot S))} [theorem T2]\\ \pline[6.]{(Q \land \lnot S)}[equivalence, 5, 4]\\ \pline[7.]{\lnot S}[simplification, 6] } \]

Using theorems made this proof much shorter than it might otherwise be.  Also, theorems often make a proof easier to follow, since we recognize the theorems as tautologies—as sentences that must be true.

9.8  Problems

  1. Prove each of the following arguments is valid.
     a. Premises: ((P^Q) ↔ R), (P ↔ S), (S ^ Q). Conclusion: R.
     b. Premises: (P ↔ Q). Conclusion: ((P → Q) ^ (Q → P)).
     c. Premises: P, ¬Q. Conclusion: ¬(P ↔ Q).
     d. Premises: (¬PvQ), (Pv¬Q). Conclusion: (P ↔ Q).
     e. Premises: (P ↔ Q), (R ↔ S). Conclusion: ((P^R) ↔ (Q^S)).
     f. Premises: ((PvQ) ↔ R), ¬(P ↔ Q). Conclusion: R.
     g. Conclusion: ((P ↔ Q) ↔ (¬P ↔ ¬Q)).
     h. Conclusion: ((P → Q) ↔ (¬P v Q)).
  2. Prove each of the following theorems.
     a. T2
     b. T3
     c. T5
     d. T6
     e. T7
     f. T8
     g. T9
     h. ((P^Q) ↔ ¬(¬Pv¬Q))
     i. ((P→Q) ↔ ¬(P^¬Q))
  3. Here are some passages from literature, philosophical works, and important political texts. Hopefully you recognize some of them. Find the best translation into propositional logic. Because these are from diverse texts, you will find it easiest to make a new key for each sentence.
     a. “Neither a borrower nor a lender be.” (Shakespeare, Hamlet.)
     b. “My copy-book was the board fence, brick wall, and pavement.” (Frederick Douglass, Narrative of the Life of Frederick Douglass.)
     c. “The bourgeoisie has torn away from the family its sentimental veil, and has reduced the family relation to a mere money relation.” (Marx and Engels, The Communist Manifesto.)
     d. “The Senate shall chuse their other Officers, and also a President pro tempore, in the Absence of the Vice President, or when he shall exercise the Office of President of the United States.” (The Constitution of the United States.)
     e. “Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.” (The Constitution of the United States.)
     f. “Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness. Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.” (Charles Dickens, David Copperfield.)
     g. “Thou shalt get kings, though thou be none.” (Shakespeare, Macbeth.)
     h. “If a faction consists of less than a majority, relief is supplied by the republican principle, which enables the majority to defeat its sinister views by regular vote.” (Federalist Papers.)
  4. In normal colloquial English, write your own valid argument with at least two premises, at least one of which is a biconditional. Your argument should just be a paragraph (not an ordered list of sentences or anything else that looks like formal logic).  Translate it into propositional logic and prove it is valid.
  5. In normal colloquial English, write your own valid argument with at least two premises, and with a conclusion that is a biconditional. Your argument should just be a paragraph (not an ordered list of sentences or anything else that looks like formal logic).  Translate it into propositional logic and prove it is valid.
  6. Here is a passage from Aquinas’s reflections on the law, The Treatise on the Laws. Symbolize this argument and prove it is valid.

A law, properly speaking, regards first and foremost the order to the common good. Now if a law regards the order to the common good, then its making belongs either to the whole people, or to someone who is the viceregent of the whole people. And therefore the making of a law belongs either to the whole people or to the viceregent of the whole people.


[11] From Hume’s Enquiry Concerning Human Understanding, p.161 in Selby-Bigge and Nidditch (1995 [1777]).

License

A Concise Introduction to Logic Copyright © 2017 by Craig DeLancey is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
