I. Boolean Lattices (6.1)

 A. inference rules for Boolean lattices (6.1)

 B. Truth Values (6.2)

  1. operators

  2. inference rules

     Q: What's the correspondence between the Boolean operators
        and the lattice operators?

------------------------------------------
Two special properties:

1. it's totally ordered:

   ___________________________ (==> linear)
   |- (t ==> t') \/ (t' ==> t)

2. we identify truth with theorem-hood:

   ____ (truth)
   |- T
------------------------------------------

  3. Derivations with Logical Connectives (6.3)

     We want to use our proof format for the Booleans.

     Q: The inference rules only talk about relations; how can we
        use them to prove a conjunction or disjunction?

------------------------------------------
DERIVATIONS WITH LOGICAL CONNECTIVES (6.3)

   _____________________ (T rule)
   |- t <==> (t <==> T)

So we can prove a theorem Phi |- t by an outline of the form

   Phi |- t
   <==> { reason }
      T

Justification for this:

                     _____________________ {T rule}
   Phi |- t <==> T   |- t <==> (t <==> T)
   ______________________________________ {substitution}
   Phi |- t

We can also do a proof by contradiction using:

   _____________________ (F rule)
   |- !t <==> (t <==> F)
------------------------------------------

     Q: If we can show (T ==> e), does that prove e?
        (yes, by the T ==> rule)

     Q: If we can show (e ==> F), does that prove !e?
        (yes, by the ==> F rule)

     Q: If we can show (e ==> T), does that prove e?
        no, everything implies T

     Q: If we can show (F ==> e), does that prove e?
        no, everything is implied by F

     Q: How would you prove !t?
        can prove t <==> F or t ==> F (proof by contradiction)

   a. properties of truth and falsity

------------------------------------------
PROPERTIES OF TRUTH AND FALSITY
WITH CONJUNCTION AND DISJUNCTION

   _________________ (T /\ rule)
   |- T /\ t <==> t

   _________________ (T \/ rule)
   |- T \/ t <==> T

   _________________ (F /\ rule)
   |- F /\ t <==> F

   _________________ (F \/ rule)
   |- F \/ t <==> t
------------------------------------------

     Q: how would you prove the T \/ rule?

        |- T
        <==> { ==> antisymmetric }
          * T
            ==> { \/ introduction }
               T \/ t
          * T \/ t
            ==> { \/ elimination }
              * t
                ==> { T greatest }
                   T
              * T
                ==> { ==> reflexive }
                   T
            . T
        . T \/ t

   b. Two Valued Logic

   ___________________________ (Boolean cases)
   |- (t <==> T) \/ (t <==> F)

   Proof:

        |- (t <==> T) \/ ("t <==> F")
        <==> { <==> F rule }
           (t <==> T) \/ !t
        <==> { <==> T rule }
           (t <==> T) \/ !(t <==> T)
        <==> { exhaustiveness }
           T

   c. Modus Ponens

   Phi |- t     Phi' |- t ==> t'
   _____________________________ (Modus Ponens)
   Phi \union Phi' |- t'

   Proof: assume Phi |- t and Phi' |- t ==> t'.

        Phi \union Phi' |- T
        ==> { first hypothesis }
           t
        ==> { second hypothesis }
           t'

     Q: What does this prove?
        Phi \union Phi' |- T ==> t',
        from which the result follows by the T ==> rule.

     So this is a valid kind of proof.  Note the use of hypotheses.

   d. Practice

   See the file cohen.txt for a slower take on this...

     Q: Can you prove that \/ is idempotent?

     Q: Can you prove that \/ distributes over equivalence:
        |- P \/ (Q <==> R) <==> P \/ Q <==> P \/ R ?

     Q: Can you prove the "golden rule":
        |- t /\ t' <==> t <==> t' <==> t \/ t' ?

     Q: Can you prove the following absorption laws:
        (i)  |- X \/ (X /\ Y) <==> X
        (ii) |- X /\ (X \/ Y) <==> X ?

     Q: Can you prove that |- P ==> Q <==> P \/ Q <==> Q ?

     Q: Can you prove that |- X ==> Y <==> X /\ Y <==> X ?
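     Aside (not from the book): since Bool is finite, you can
     sanity-check a candidate identity by enumerating all
     assignments before attempting a calculational proof.  Here is
     a minimal Haskell sketch; the names iff, holds1, and holds2
     are ours.  Passing such a check is evidence, not a proof in
     our calculus.  It is a standalone script: load it in GHCi (or
     runghc) and run main.

        iff :: Bool -> Bool -> Bool
        iff = (==)    -- <==> read as equality on Bool

        holds1 :: (Bool -> Bool) -> Bool
        holds1 f = and [ f t | t <- [False, True] ]

        holds2 :: (Bool -> Bool -> Bool) -> Bool
        holds2 f = and [ f t t' | t <- [False, True], t' <- [False, True] ]

        main :: IO ()
        main = mapM_ print
          [ holds1 (\t -> (True  || t) `iff` True)            -- T \/ rule
          , holds1 (\t -> (False || t) `iff` t)               -- F \/ rule
          , holds1 (\t -> (t `iff` True) || (t `iff` False))  -- Boolean cases
          , holds2 (\t t' -> (t && t')
                       `iff` (t `iff` (t' `iff` (t || t'))))
              -- golden rule (one parse; <==> is associative)
          ]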
  4. Quantifiers (6.4)

     Quantifiers are introduced as higher-order predicates!

   a. notation

------------------------------------------
ASCII NOTATION

``(\forall v :: t)'' means (\forall v * t)
``(\exists v :: t)'' means (\exists v * t)

Note: always use parentheses around quantified expressions.
------------------------------------------

     The ``::'' looks better than *.  This notation is from
     Dijkstra; Cohen also uses it.

------------------------------------------
QUANTIFIERS (6.4)

   \forall : (\Sigma -> Bool) -> Bool
   \exists : (\Sigma -> Bool) -> Bool

Notation:

   (\forall v :: t) = \forall (\ v . t)
   (\exists v :: t) = \exists (\ v . t)

Bounded quantifiers:

   (\forall v : b : t) = (\forall v :: b ==> t)
   (\exists v : b : t) = (\exists v :: b /\ t)

Exists unique:

 ^ (\exists! v :: t) = (\exists v :: t /\ (\forall v' : t[v := v'] : v' = v))
------------------------------------------

     e.g., (\forall x : x > 0 : x/x = 1)

   b. semantics

     Q: What's \forall like in a lattice?  \exists?
        meet and join
        (\forall x:T :: A.x) = /\ { A.x | x:T }

------------------------------------------
UNIVERSAL QUANTIFICATION RULES

Introduction/Generalization:

   Phi |- s ==> t
   _____________________________ (\forall introduction)
   Phi |- s ==> (\forall v :: t)
   * v is not free in s or Phi

Elimination/Specialization/Instantiation:

   ____________________________________ (\forall elimination)
   Phi |- (\forall v :: t) ==> t[v := t']
   * t' is free for v in t
------------------------------------------

     The side condition for \forall introduction is often phrased
     as "v is fresh".

     Q: What do these mean intuitively?

     Q: Is this rule valid:

        Phi |- (\forall v :: t)
        _______________________ (universal instantiation)
        Phi |- t[v := t']
        * t' is free for v in t

        ?  Yes, one can prove it using modus ponens.

     Q: What does the side condition mean?
        That there won't be any capture of free variables in t'.

     Q: What happens if t' is not free for v in t?  e.g., consider

        z == 7 |- (\forall x :: x > 3)
        ==> { improper use of universal instantiation }  // wrong!
           (x > 3)[x := z]
        <==> { def. of substitution }
           "z" > 3
        <==> { by assumption, z == 7 }
           7 > 3
        <==> { arithmetic }
           T

------------------------------------------
EXISTENTIAL QUANTIFICATION

Introduction:

   ______________________________________ (\exists introduction)
   Phi |- t[v := t'] ==> (\exists v :: t)
   * t' is free for v in t

Elimination:

   Phi |- t ==> s
   _____________________________ (\exists elimination)
   Phi |- (\exists v :: t) ==> s
   * v is not free in s or Phi
------------------------------------------

     Q: What do these mean intuitively?

     Q: What happens if the side conditions don't hold?
        Can get a contradiction:

        z == 7 |- T
        ==> { == is reflexive }
           3 == 3
        <==> { definition of substitution }
           (v == 3)[v := 3]
        ==> { \exists introduction, 3 is free for v in v == 3 }
           (\exists v :: v == 3)
        ==> { improper use of \exists elimination }  // wrong!
          * z == 3
            ==> { ==> reflexive }
               z == 3
        . "z" == 3
        <==> { by assumption, z == 7 }
           7 == 3
        <==> { arithmetic }
           F

        so we have "proved" that T ==> F, using a bad version of
        \exists elimination.

   c. derived rules (table 6.3)

     Q: Can you prove DeMorgan's laws (in table 6.3)?

     See also the file cohen-quantifiers.txt in this directory,
     especially for rules regarding bounded quantifiers.
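     Aside (not from the book): the lattice semantics above
     suggests a toy model.  Over a finite "state space", \forall is
     a big conjunction (meet) and \exists a big disjunction (join).
     A hypothetical Haskell sketch; the domain dom and all names
     are ours:

        dom :: [Int]
        dom = [0 .. 5]

        forAll, exists :: (Int -> Bool) -> Bool
        forAll p = and (map p dom)   -- generalized meet (/\)
        exists p = or  (map p dom)   -- generalized join (\/)

        main :: IO ()
        main = mapM_ print
          [ forAll (\x -> exists (\y -> y >= x))                  -- True here
          , exists (\v -> v == 3) == not (forAll (\v -> v /= 3))  -- a De Morgan instance
          ]

     In this finite model the freeness side conditions are
     invisible; they only bite when terms mention the bound
     variable.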
CS 641 Lecture                   -*- Outline -*-

II. Derivations

    Proof techniques, not specific to Boolean lattices.

 A. Assumptions (6.5)

    Assumptions and implications are related.  A sequent Phi |- t
    says that t follows from Phi.  Assumptions are just a way to
    work with implications.

------------------------------------------
RULES FOR ASSUMPTIONS (6.5)

   Phi, t |- t'
   ________________ (discharge)
   Phi |- t ==> t'

   Phi |- t ==> t'
   ________________ (undischarge)
   Phi, t |- t'
------------------------------------------

     Q: How would you prove the assume axiom, Phi, t |- t ?
        use reflexivity and undischarge (in a derivation)

     Q: How does this let you use lemmas?

        Phi |- t     Phi', t |- t'
        __________________________ (use of lemmas, or cut rule)
        Phi \union Phi' |- t'

        proof uses modus ponens and discharging

------------------------------------------
CASE ANALYSIS

   Phi, t |- t'     Phi', not.t |- t'
   __________________________________ (case analysis)
   Phi \union Phi' |- t'

Proof.  Assume Phi, t |- t' and Phi', not.t |- t'.

   Phi \union Phi' |- T
   <==> { not exhaustion }
      t \/ not.t
   ==> { \/ elimination }
     * t
       ==> { undischarge, assumption }
          t'
     * not.t
       ==> { undischarge, assumption }
          t'
   . t'
------------------------------------------

     Note the mix of <==> and ==>: mixing yields the composition
     of the relations involved, which is fine when all the arrows
     go the same way.

------------------------------------------
MONOTONICITY AND ALPHA

   Phi |- t
   ____________ (add assumption)
   Phi, t' |- t

   Phi, s |- t
   _____________________________ (change of free variables)
   Phi, s[x := x'] |- t[x := x']
   * x is not free in Phi, and x' is free for x in s and t
------------------------------------------

     Q: Why do these make sense?

 B. Derivations with local assumptions (6.6)

  1. focusing rules for connectives

     The reason for using the focusing rules, instead of just
     substituting equals for equals, is to get the extra
     assumption.

------------------------------------------
FOCUSING RULES UNDER EQUIVALENCE

Focusing on a conjunct:

   Phi, t |- t1 <==> t2
   ___________________________
   Phi |- t /\ t1 <==> t /\ t2

Example:

   |- x = 3 /\ "y = x"
   <==> { focus on conjunct }
     * [ x = 3 ]
          y = x
       <==> { by assumption, x = 3 }
          y = 3
   . x = 3 /\ y = 3
------------------------------------------

     Q: why is this justified?
        because both conjuncts must be true

------------------------------------------
FOCUSING RULES FOR OTHER CONNECTIVES
UNDER EQUIVALENCE

Focusing on a disjunct:

   Phi, not.t |- t1 <==> t2
   ___________________________
   Phi |- t \/ t1 <==> t \/ t2

Focusing on the consequent of an implication:

   Phi, t |- t1 <==> t2
   _____________________________
   Phi |- t ==> t1 <==> t ==> t2

Focusing on the antecedent of an implication:

   Phi, not.t |- t1 <==> t2
   _____________________________
   Phi |- t1 ==> t <==> t2 ==> t
------------------------------------------

     Q: How would you explain these?

     Q: Can you prove that p ==> ((p ==> q) ==> q) ?

        |- p ==> ("(p ==> q) ==> q")
        <==> { focus on consequent of ==> }
          * [ p ]
               (p ==> q) ==> q
            <==> { by assumption, p <==> T }
               "(T ==> q)" ==> q
            <==> { T ==> rule }
               q ==> q
            <==> { ==> reflexive }
               T
        . p ==> T
        <==> { T greatest }
           T

  2. focusing and monotonicity

------------------------------------------
FOCUSING AND MONOTONICITY (UNDER RELATIONS)

For monotonic functions we have:

   Phi |- t <= t'
   __________________
   Phi |- f.t <= f.t'
   * if f is monotonic

Can manipulate t separately.  Works inside quantifiers, since they
are monotonic with respect to implication.

Example, prove:

   |- (\exists v * v = t /\ t') ==> t'[v := t]
   * if v is not free in t
------------------------------------------

     Note, no extra hypothesis here.

     Q: What happens if f is antimonotonic?

     Proof: Assume v is not free in t.

        |- (\exists v * "v = t /\ t'")
        <==> { substitute using local assumption v = t }
                                       // note, no subderivation!
           (\exists v * "v = t /\ t'[v := t]")
        ==> { /\ elimination in monotonic context }
           (\exists v * "t'[v := t]")
        <==> { drop vacuous quantifier, exercise 6.8;
               v is not free in t }
           t'[v := t]

     Note the abbreviation where the comment is.
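     Aside (not from the book): "monotonic context" can itself be
     checked by enumeration.  A small sketch, with our own names,
     taking ==> as the order on Bool: positions like a conjunct, a
     disjunct, or the consequent of an implication are monotonic,
     while the antecedent is not.

        imp :: Bool -> Bool -> Bool
        imp a b = not a || b

        -- f is monotone iff t <= t' implies f t <= f t' (<= is ==>)
        monotone :: (Bool -> Bool) -> Bool
        monotone f = and [ (t `imp` t') `imp` (f t `imp` f t')
                         | t <- [False, True], t' <- [False, True] ]

        main :: IO ()
        main = mapM_ print
          [ monotone (\x -> True  && x)     -- conjunct position: True
          , monotone (\x -> False || x)     -- disjunct position: True
          , monotone (\x -> True `imp` x)   -- consequent: True
          , monotone (\x -> x `imp` False)  -- antecedent: False (antimonotonic)
          ]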
------------------------------------------
FOCUSING UNDER IMPLICATION

Focusing on a conjunct (exercise 6.7a):

   Phi, t |- t1 ==> t2
   ___________________________
   Phi |- t /\ t1 ==> t /\ t2

Focusing on a disjunct (exercise 6.7b)

Focusing on the consequent of an implication (exercise 6.7b)

Focusing on the antecedent of an implication (exercise 6.7b)
------------------------------------------

     the formulation is the dual of the rule for conjunction

     Q: What's a monotonic context?

     Q: What parts of an implication are monotonic contexts?

  3. focusing and local definitions

------------------------------------------
USING LOCAL DEFINITIONS AS ASSUMPTIONS

   Phi, v = t |- t1 ~ t2
   ____________________________________________
   Phi |- (let v = t in t1) ~ (let v = t in t2)

   where ~ is an arbitrary relation
------------------------------------------

     See the procedure example on p. 123.  You get to use the
     definition as another assumption in the subproof.

CS 641 Lecture                   -*- Outline -*-

III. An algebraic presentation of the Boolean operators,
     following Cohen

    This follows chapter 2 of Cohen's book "Programming in the
    1990s", Springer-Verlag, 1990.  The ideas are originally due
    to Dijkstra.  It has the advantage that, being more algebraic,
    in practice you use reasoning about equivalence more often
    than antisymmetry proofs.

 A. The equivalence (<==>)

    Cohen makes the following postulates for the equivalence.

------------------------------------------
POSTULATES FOR EQUIVALENCE

for all Boolean predicates P, Q, R

<==> is equality for Booleans:

   |- (P <==> Q) = (P = Q)

<==> is symmetric:

   |- (P <==> Q) <==> (Q <==> P)

<==> is associative:

   |- ((P <==> Q) <==> R) <==> (P <==> (Q <==> R))

<==> has unit true:

   |- (P <==> true) <==> P
------------------------------------------

    The first postulate is also assumed (see p. 111) by Back and
    von Wright.  This allows us to substitute equals for equals
    when we have an equivalence.  It should really be formulated
    as a definition:

      ^ (x <==> y) = (x = y)

    where (<==>) : Bool -> Bool -> Bool.

    From this definition, symmetry of equivalence follows
    trivially.

    Lemma (<==> is symmetric): (P <==> Q) = (Q <==> P)
    Proof:

        |- P <==> Q
        = { definition of <==> }
           P = Q
        = { = is symmetric }
           Q = P
        = { definition of <==> }
           Q <==> P
    QED

    Corollary.  |- (P <==> Q) <==> (Q <==> P)
    Proof: Use the definition of <==> and the above lemma.  QED
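    Aside (ours, not Cohen's): the associativity postulate is the
    surprising one.  Reading <==> as (==) on Bool, a brute-force
    check confirms it before we grind through the proof below:

        main :: IO ()
        main = print (and [ ((p == q) == r) == (p == (q == r))
                          | p <- bs, q <- bs, r <- bs ])   -- True
          where bs = [False, True]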
    Now we prove associativity for the equivalence, using two
    lemmas.

    Lemma.  ((P <==> Q) <==> R) ==> (P <==> (Q <==> R))
    Proof: We assume (P <==> Q) <==> R in the calculation below.

        (P <==> Q) <==> R |- P
        <==> { ==> antisymmetric }
          * P
            ==> { discharge }
              * [ P ]
                   Q <==> R
                <==> { <==> T rule, to get the shape of the main
                       assumption }
                   ("T" <==> Q) <==> R
                <==> { by local assumption, T <==> P }
                   (P <==> Q) <==> R
                <==> { by assumption, ((P <==> Q) <==> R) <==> T }
                   T
            . Q <==> R
          * Q <==> R
            ==> { discharge }
              * [ Q <==> R ]
                   T
                <==> { by assumption, T <==> (P <==> Q) <==> R }
                   (P <==> Q) <==> "R"
                <==> { substitute equals for equals, we are
                       assuming Q <==> R }
                   (P <==> Q) <==> Q
                ==> { case analysis }
                  * [ Q ]
                       (P <==> "Q") <==> "Q"
                    <==> { by assumption, Q <==> T, twice }
                       (P <==> T) <==> T
                    <==> { <==> T rule, twice }
                       P
                  * [ !Q ]
                       (P <==> "Q") <==> "Q"
                    <==> { by assumption (and <==> F rule),
                           Q <==> F, twice }
                       (P <==> F) <==> F
                    <==> { <==> F rule }
                       !("P <==> F")
                    <==> { <==> F rule }
                       !(!P)
                    <==> { double negation (ex. 6.4) }
                       P
                . P
            . P
        . (Q <==> R)
    QED

    Lemma.  (P <==> (Q <==> R)) ==> ((P <==> Q) <==> R)
    Proof: We assume P <==> (Q <==> R) in the calculation below.

        P <==> (Q <==> R) |- (P <==> Q)
        <==> { ==> antisymmetric }
          * R
            ==> { discharge }
              * [ R ]
                   P <==> "Q"
                <==> { <==> T rule }
                   P <==> (Q <==> "T")
                <==> { by local assumption, T <==> R }
                   P <==> (Q <==> R)
                <==> { by assumption, (P <==> (Q <==> R)) <==> T }
                   T
            . P <==> Q
          * P <==> Q
            ==> { discharge }
              * [ P <==> Q ]
                   T
                <==> { by assumption, T <==> (P <==> (Q <==> R)) }
                   P <==> (Q <==> R)
                <==> { substitute equals for equals, we are
                       assuming P <==> Q }
                   Q <==> (Q <==> R)
                ==> { case analysis }
                  * [ Q ]
                       "Q" <==> ("Q" <==> R)
                    <==> { by assumption, Q <==> T, twice }
                       T <==> (T <==> R)
                    <==> { <==> T rule, twice }
                       R
                  * [ !Q ]
                       "Q" <==> ("Q" <==> R)
                    <==> { by assumption, Q <==> F, twice }
                       F <==> (F <==> R)
                    <==> { <==> F rule }
                       !("F <==> R")
                    <==> { <==> F rule }
                       !(!R)
                    <==> { double negation (ex. 6.4) }
                       R
                . R
            . R
        . R
    QED

    Corollary (<==> is associative):
       ((P <==> Q) <==> R) <==> (P <==> (Q <==> R))
    Proof: This follows by antisymmetry of ==> and the preceding
    two lemmas.

    Some more basic properties of equivalence.

    Lemma (<==> has unit T): (P <==> T) <==> P
    Proof:

        |- ("P <==> T") <==> P
        <==> { <==> is symmetric }
           (T <==> P) <==> P
        <==> { <==> is symmetric }
           P <==> (T <==> P)
        <==> { <==> T rule }
           T
    QED

    Corollary (<==> is reflexive): (P <==> P) <==> T
    Proof: This follows by use of symmetry.

    Now we have the lemma of mutual implication.

    Lemma (mutual implication).
       (A <==> B) <==> (A ==> B) /\ (B ==> A)
    Proof:

        |- (A ==> B) /\ (B ==> A)
        <==> { ==> antisymmetric }
          * (A ==> B) /\ (B ==> A)
            ==> { ==> antisymmetric }
              * (A ==> B) /\ (B ==> A)
                ==> { /\ elimination }
                   (A ==> B)
              * (A ==> B) /\ (B ==> A)
                ==> { /\ elimination }
                   (B ==> A)
            . A <==> B
          * A <==> B
            ==> { /\ introduction }
              * A <==> B
                ==> { discharge }
                  * [ A <==> B ]
                       ("A" ==> B)
                    <==> { assumption }
                       (B ==> B)
                    <==> { ==> reflexive }
                       T
                . (A ==> B)
              * A <==> B
                ==> { discharge }
                  * [ A <==> B ]
                       (B ==> "A")
                    <==> { assumption }
                       (B ==> B)
                    <==> { ==> reflexive }
                       T
                . (B ==> A)
            . (A ==> B) /\ (B ==> A)
        . A <==> B
    QED

    Note: From now on we omit all appeals to symmetry and
    associativity for <==>.

 B. disjunction (\/)

    Cohen makes the following postulates for disjunction.

------------------------------------------
POSTULATES FOR DISJUNCTION

disjunction is associative and symmetric

disjunction is idempotent:

   |- P \/ P <==> P

disjunction distributes over equivalence:

   |- P \/ (Q <==> R) <==> P \/ Q <==> P \/ R
------------------------------------------

    We start by proving these as lemmas.  We use a lemma to
    capture the common reasoning behind the two (symmetric) parts
    of the proof of symmetry.

    Lemma.  |- P \/ Q ==> Q \/ P
    Proof:

        |- P \/ Q
        ==> { case analysis }
          * [ P ]
               "P" \/ Q
            <==> { by assumption, P <==> T }
               T \/ Q
            <==> { T \/ rule }
               T
            <==> { by assumption, T <==> P }
               P
            ==> { \/ introduction (the second rule on p. 112) }
               Q \/ P
          * [ !P ]
               "P" \/ Q
            <==> { by assumption, P <==> F }
               F \/ Q
            <==> { F \/ rule }
               Q
            ==> { \/ introduction (the first rule on p. 112) }
               Q \/ P
        . Q \/ P
    QED
    Lemma (disjunction is symmetric).  |- P \/ Q <==> Q \/ P
    Proof.

        |- P \/ Q
        <==> { ==> antisymmetric }
          * P \/ Q
            ==> { lemma above }
               Q \/ P
          * Q \/ P
            ==> { lemma above, with P,Q := Q,P }
               P \/ Q
        . Q \/ P
    QED

    Lemma (disjunction is associative).
       |- (P \/ Q) \/ R <==> P \/ (Q \/ R)
    Proof.

        |- (P \/ Q) \/ R
        <==> { ==> antisymmetric }
          * (P \/ Q) \/ R
            ==> { case analysis }
              * [ R ]
                   (P \/ Q) \/ "R"
                <==> { by assumption, R <==> T }
                   (P \/ Q) \/ T
                <==> { T \/ rule, symmetry }
                   T
                <==> { by assumption, T <==> R }
                   R
                ==> { \/ introduction }
                   Q \/ R
                ==> { \/ introduction }
                   P \/ (Q \/ R)
              * [ !R ]
                   (P \/ Q) \/ "R"
                <==> { by assumption, R <==> F }
                   (P \/ Q) \/ F
                <==> { F \/ rule, symmetry }
                   P \/ "Q"
                <==> { F \/ rule }
                   P \/ (Q \/ "F")
                <==> { by assumption, F <==> R }
                   P \/ (Q \/ R)
            . P \/ (Q \/ R)
          * P \/ (Q \/ R)
            ==> { case analysis }
              * [ P ]
                   "P" \/ (Q \/ R)
                <==> { by assumption, P <==> T }
                   T \/ (Q \/ R)
                <==> { T \/ rule, symmetry }
                   T
                <==> { by assumption, T <==> P }
                   P
                ==> { \/ introduction }
                   P \/ Q
                ==> { \/ introduction }
                   (P \/ Q) \/ R
              * [ !P ]
                   "P" \/ (Q \/ R)
                <==> { by assumption, P <==> F }
                   "(F \/ Q)" \/ R
                <==> { F \/ rule }
                   "Q" \/ R
                <==> { F \/ rule }
                   ("F" \/ Q) \/ R
                <==> { by assumption, F <==> P }
                   (P \/ Q) \/ R
            . (P \/ Q) \/ R
        . P \/ (Q \/ R)
    QED

    Lemma (disjunction is idempotent): |- P \/ P <==> P
    Proof.

        |- P \/ P
        <==> { ==> antisymmetric }
          * P \/ P
            ==> { \/ elimination }
              * P
                ==> { ==> reflexive }
                   P
              * P
                ==> { ==> reflexive }
                   P
            . P
          * P
            ==> { \/ introduction }
               P \/ P
        . P
    QED

    Lemma (disjunction distributes over equivalence):
       |- P \/ (Q <==> R) <==> P \/ Q <==> P \/ R
    Proof.

        |- P \/ (Q <==> R)
        <==> { case analysis }
          * [ P ]
               "P" \/ (Q <==> R)
            <==> { by assumption, P <==> T }
               T \/ (Q <==> R)
            <==> { T \/ rule }
               T
            <==> { <==> is reflexive }
               "T" <==> "T"
            <==> { T \/ rule, twice }
               "T" \/ Q <==> "T" \/ R
            <==> { by assumption, T <==> P, twice }
               P \/ Q <==> P \/ R
          * [ !P ]
               "P" \/ (Q <==> R)
            <==> { by assumption, P <==> F }
               F \/ (Q <==> R)
            <==> { F \/ rule }
               "Q" <==> "R"
            <==> { F \/ rule, twice }
               "F" \/ Q <==> "F" \/ R
            <==> { by assumption, F <==> P, twice }
               P \/ Q <==> P \/ R
        . P \/ Q <==> P \/ R
    QED

    Here's an interesting lemma not in the Back and von Wright
    book, and a proof using these axioms.  (From Cohen, p. 29.)

    Lemma (disjunction distributes over itself):
       |- X \/ (Y \/ Z) <==> (X \/ Y) \/ (X \/ Z)
    Proof: We start with the more complex side.

        |- (X \/ Y) \/ (X \/ Z)
        <==> { associativity of \/ }
           X \/ Y \/ X \/ Z
        <==> { symmetry of \/ }
           "X \/ X" \/ Y \/ Z
        <==> { disjunction is idempotent }
           X \/ Y \/ Z
        <==> { associativity of \/ }
           X \/ (Y \/ Z)
    QED
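    Aside (ours): the three disjunction postulates, checked by
    enumeration.  The name iff is invented; this is a sanity
    check, not a calculus proof.

        iff :: Bool -> Bool -> Bool
        iff = (==)

        bs :: [Bool]
        bs = [False, True]

        main :: IO ()
        main = mapM_ print
          [ and [ (p || p) `iff` p | p <- bs ]                       -- idempotent
          , and [ (p || (q `iff` r)) `iff` ((p || q) `iff` (p || r))
                | p <- bs, q <- bs, r <- bs ]                        -- \/ over <==>
          , and [ (x || (y || z)) `iff` ((x || y) || (x || z))
                | x <- bs, y <- bs, z <- bs ]                        -- \/ over itself
          ]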
 C. conjunction (/\)

    Cohen makes the following postulate for conjunction.

------------------------------------------
POSTULATE FOR CONJUNCTION

The "Golden Rule":

   |- P /\ Q <==> P <==> Q <==> P \/ Q
------------------------------------------

    This makes sense because <==> is associative.  For example,
    you can parse it as a definition of conjunction:

      ^ P /\ Q = ((P <==> Q) <==> P \/ Q).

    Or think about what this means in terms of lattices.

    You may recall that we started trying to prove this in class,
    and wound up trying to prove some lemmas about it.  I now
    think the most convincing proof of this involves a case
    analysis...

    Proof: We calculate as follows.

        |- P <==> Q <==> P \/ Q
        <==> { ==> antisymmetric }
          * P <==> Q <==> P \/ Q
            ==> { case analysis }
              * [ P ]
                   "P" <==> Q <==> "P" \/ Q
                <==> { by assumption, P <==> T, twice }
                   T <==> Q <==> T \/ Q
                <==> { T <==> rule }
                   Q <==> "T \/ Q"
                <==> { T \/ rule }
                   Q <==> Q
                <==> { <==> is reflexive }
                   T
                <==> { by assumption, T <==> P }
                   P
                ==> { /\ introduction }
                   P /\ Q
              * [ !P ]
                   P <==> Q <==> "P" \/ Q
                <==> { by assumption, P <==> F }
                   P <==> Q <==> "F \/ Q"
                <==> { F \/ rule }
                   P <==> "Q <==> Q"
                <==> { <==> is reflexive }
                   P <==> T
                <==> { T <==> rule }
                   P
                ==> { /\ introduction }
                   P /\ Q
            . P /\ Q
          * P /\ Q
            ==> { case analysis }
              * [ P ]
                   "P" /\ Q
                <==> { by assumption, P <==> T }
                   T /\ Q
                <==> { T /\ rule }
                   Q
                <==> { T <==> rule }
                   "T" <==> Q
                <==> { <==> is reflexive }
                   P <==> P <==> Q
                <==> { symmetry }
                   P <==> Q <==> "P"
                ==> { \/ introduction, focusing lemma }
                   P <==> Q <==> P \/ Q
              * [ !P ]
                   "P" /\ Q
                <==> { by assumption, P <==> F }
                   F /\ Q
                <==> { F /\ rule }
                   F
                <==> { by assumption, F <==> P }
                   P
                <==> { T <==> rule (and symmetry), to get an
                       equivalence }
                   P <==> "T"
                <==> { <==> is reflexive, to get closer to the
                       desired shape }
                   P <==> Q <==> Q
                <==> { F \/ rule, to introduce a disjunct }
                   P <==> Q <==> "F" \/ Q
                <==> { by assumption, F <==> P }
                   P <==> Q <==> P \/ Q
            . P <==> Q <==> P \/ Q
        . P /\ Q
    QED

    Now we can prove various other properties of conjunction.
    Note that we are now starting to use proofs of equivalences
    without an antisymmetry argument; those arguments are buried
    in the proofs of the lemmas above.

    Lemma (conjunction is symmetric): |- P /\ Q <==> Q /\ P
    Proof: This is the kind of thing where it would suffice to
    say, "this is a corollary of the golden rule and symmetry of
    equivalence and disjunction."  But here it is...

        |- P /\ Q
        <==> { golden rule }
           "P <==> Q" <==> P \/ Q
        <==> { <==> is symmetric }
           Q <==> P <==> P \/ Q
        <==> { \/ is symmetric }
           Q <==> P <==> Q \/ P
        <==> { golden rule }
           Q /\ P
    QED

    Lemma (conjunction is associative).
       |- (P /\ Q) /\ R <==> P /\ (Q /\ R)
    Proof.

        |- (P /\ Q) /\ R
        <==> { golden rule }
           ("P /\ Q") <==> R <==> ("P /\ Q") \/ R
        <==> { golden rule, twice }
           (P <==> Q <==> P \/ Q) <==> R
             <==> "(P <==> Q <==> P \/ Q) \/ R"
        <==> { disjunction distributes over equivalence }
           P <==> Q <==> P \/ Q <==> R
             <==> P \/ R <==> Q \/ R <==> P \/ Q \/ R
        <==> { <==> is associative }
           P <==> Q <==> R <==> Q \/ R
             <==> "P \/ Q <==> P \/ R <==> P \/ Q \/ R"
        <==> { disjunction distributes over equivalence }
           P <==> ("Q <==> R <==> Q \/ R")
             <==> P \/ ("Q <==> R <==> Q \/ R")
        <==> { golden rule, twice }
           P <==> (Q /\ R) <==> P \/ (Q /\ R)
        <==> { golden rule }
           P /\ (Q /\ R)
    QED

    Lemma (conjunction is idempotent).  |- X /\ X <==> X
    Proof.  This has a nice proof using the golden rule (Cohen,
    p. 31), which avoids the use of antisymmetry.

        |- X /\ X
        <==> { golden rule }
           X <==> X <==> X \/ X
        <==> { \/ is idempotent }
           X <==> "X <==> X"
        <==> { <==> is reflexive }
           X <==> T
        <==> { T <==> rule }
           X
    QED

    The following is an immediate corollary.

    Corollary (conjunction distributes over itself).
       |- X /\ (Y /\ Z) <==> (X /\ Y) /\ (X /\ Z)

    Note that conjunction does not distribute over equivalence.
    Instead, there is the following:

    Lemma.  |- X /\ (Y <==> Z) <==> X /\ Y <==> X /\ Z <==> X

    I leave the proof of this to you (see Cohen p. 32).
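    Aside (ours): the golden rule and the lemma just above,
    checked by enumeration, reading <==> as (==).  A sanity check,
    not a proof.

        main :: IO ()
        main = mapM_ print
          [ and [ (p && q) == (p == (q == (p || q)))
                | p <- bs, q <- bs ]                         -- golden rule
          , and [ (x && (y == z)) == ((x && y) == ((x && z) == x))
                | x <- bs, y <- bs, z <- bs ]                -- /\ vs. <==>
          ]
          where bs = [False, True]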
    Here are the "absorption laws":

       (i)  |- X \/ (X /\ Y) <==> X
       (ii) |- X /\ (X \/ Y) <==> X

    Let's prove the second of these (Cohen p. 33); I leave the
    other to you.

        |- X /\ (X \/ Y)
        <==> { golden rule, with P,Q := X, X \/ Y }
           X <==> X \/ Y <==> "X \/ X" \/ Y
        <==> { idempotence of \/ }
           X <==> "X \/ Y <==> X \/ Y"
        <==> { <==> is reflexive }
           X <==> T
        <==> { T <==> rule }
           X
    QED

 D. implication (==>)

    Cohen makes the following postulate for implication.

------------------------------------------
POSTULATE FOR IMPLICATION

Correspondence between \/ and ==> :

   |- P ==> Q <==> P \/ Q <==> Q
------------------------------------------

    We can prove this as a lemma (note the connection to
    correspondence, as on p. 34 of Back and von Wright).

    Lemma (correspondence between \/ and ==>).
       |- P ==> Q <==> P \/ Q <==> Q
    Proof.

        |- P ==> Q
        <==> { case analysis }
          * [ P ]
               "P" ==> Q
            <==> { by assumption, P <==> T }
               T ==> Q
            <==> { T ==> rule }
               Q
            <==> { T <==> rule, to get a <==> in the formula }
               "T" <==> Q
            <==> { T \/ rule, to get an \/ in the formula }
               "T" \/ Q <==> Q
            <==> { by assumption, T <==> P }
               P \/ Q <==> Q
          * [ !P ]
               "P" ==> Q
            <==> { by assumption, P <==> F }
               F ==> Q
            <==> { F ==> rule }
               T
            <==> { <==> is reflexive, to get <==> and Q's in the
                   formula }
               "Q" <==> Q
            <==> { F \/ rule, to get an \/ in the formula }
               "F" \/ Q <==> Q
            <==> { by assumption, F <==> P }
               P \/ Q <==> Q
        . P \/ Q <==> Q
    QED

    The dual of the above correspondence is another useful
    characterization of implication.

    Lemma (correspondence between /\ and ==>).
       |- X ==> Y <==> X /\ Y <==> X
    Proof (Cohen p. 35).

        |- X /\ Y <==> X
        <==> { golden rule, divided as
               (P /\ Q <==> P) <==> (Q <==> P \/ Q) }
           Y <==> X \/ Y
        <==> { correspondence between \/ and ==> }
           X ==> Y
    QED

    Here are some other properties of implication you may have
    fun proving for yourself (see Cohen pp. 35-36).  The names are
    my own.

    Lemma (currying).  |- X ==> (Y ==> Z) <==> X /\ Y ==> Z

    The above is also called uncurrying when used from right to
    left.

    Lemma (modus ponens as a formula).
       |- X /\ (X ==> Y) <==> X /\ Y

    Lemma (transitivity as a formula).
       |- (X ==> Y) /\ (Y ==> Z) ==> X ==> Z

    Lemma (implication cases).  |- (X ==> Y) \/ (Y ==> Z)

    We've already shown mutual implication above.

    Lemma.  |- X <==> Y <==> X \/ Y ==> X /\ Y

 E. consequence (<==)

    Unlike Back and von Wright, Cohen gives a slightly different
    definition of the consequence operator:

------------------------------------------
CONSEQUENCE (REVERSE IMPLICATION)

 ^ P <== Q = P \/ Q <==> P
------------------------------------------

    But it's a trivial matter of using the definitions to show the
    following, which is equivalent to Back and von Wright's
    definition.

    Corollary (consequence formula).  |- (X <== Y) <==> (Y ==> X)

 F. negation (!)

    Cohen gives the following postulates for negation.

------------------------------------------
POSTULATES FOR NEGATION

Definition of negation:

   |- !(P <==> Q) <==> !P <==> Q

Excluded middle:

   |- P \/ !P

Definition of F:

   |- F <==> !T
------------------------------------------

    The law of excluded middle is called "not exhaustion" in Back
    and von Wright.  The "definition of F" is a lemma in homework
    problem 6.4.  This leaves us with the first postulate, which
    we state as a lemma.

    Lemma (definition of negation).  |- !(P <==> Q) <==> !P <==> Q
    Proof.

        |- !(P <==> Q)
        <==> { <==> F rule }
           (P <==> Q) <==> F
        <==> { <==> is associative and symmetric }
           (P <==> F) <==> Q
        <==> { <==> F rule }
           !P <==> Q
    QED
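    Aside (ours): the negation postulates, checked by
    enumeration.  A sanity check only.

        main :: IO ()
        main = mapM_ print
          [ and [ not (p == q) == (not p == q)
                | p <- bs, q <- bs ]            -- definition of negation
          , and [ p || not p | p <- bs ]        -- excluded middle
          , False == not True                   -- definition of F
          ]
          where bs = [False, True]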
    Here are some lemmas that are not in Back and von Wright.  We
    leave most of the proofs to you (see Cohen pp. 37-40).

    Lemma (transfer of negation).  |- !X <==> Y <==> X <==> !Y
    Proof.

        |- !X <==> Y
        <==> { definition of negation }
           !(X <==> Y)
        <==> { definition of negation }
           X <==> !Y
    QED

    Lemma (DeMorgan's laws).
       (i)  |- !X /\ !Y <==> !(X \/ Y)
       (ii) |- !X \/ !Y <==> !(X /\ Y)

    Lemma (complement rules).
       (i)  |- X \/ (!X /\ Y) <==> X \/ Y
       (ii) |- X /\ (!X \/ Y) <==> X /\ Y

    Lemma (shuffle, a.k.a. shunting).
       |- (X /\ Y) ==> Z <==> (X ==> !Y \/ Z)

    Lemma.  |- X <==> !X <==> false

 G. discrepancy (<=!=>)

    This relation is the complement of equivalence.  It's defined
    as follows:

------------------------------------------
THE DISCREPANCY (LOGICAL INEQUALITY)

Definition of discrepancy:

 ^ P <=!=> Q = !(P <==> Q)
------------------------------------------

    From this we can prove the following lemmas, which Cohen gives
    as additional postulates.

    Lemma (<=!=> is symmetric).  |- P <=!=> Q <==> Q <=!=> P
    Proof.  This is immediate from the definition and symmetry of
    <==>.

        |- P <=!=> Q
        <==> { definition of discrepancy }
           !("P <==> Q")
        <==> { <==> is symmetric }
           !(Q <==> P)
        <==> { definition of discrepancy }
           Q <=!=> P
    QED

    Lemma (discrepancy associates with equivalence).
       |- P <==> (Q <=!=> R) <==> (P <==> Q) <=!=> R
    Proof.

        |- P <==> (Q <=!=> R)
        <==> { definition of discrepancy }
           P <==> !(Q <==> R)
        <==> { transfer of negation lemma (above) }
           !P <==> (Q <==> R)
        <==> { <==> is associative }
           "!P <==> Q" <==> R
        <==> { definition of negation }
           !(P <==> Q) <==> R
        <==> { transfer of negation lemma (above) }
           (P <==> Q) <==> !R
        <==> { definition of discrepancy }
           (P <==> Q) <=!=> R
    QED

    Lemma (<=!=> is associative).
       |- (P <=!=> Q) <=!=> R <==> P <=!=> (Q <=!=> R)
    Proof.

        |- (P <=!=> Q) <=!=> R
        <==> { definition of discrepancy }
           !("(P <=!=> Q) <==> R")
        <==> { definition of discrepancy }
           !(!(P <==> Q) <==> R)
        <==> { transfer of negation lemma (above) }
           !("(P <==> Q) <==> !R")
        <==> { associativity of equivalence }
           !(P <==> ("Q <==> !R"))
        <==> { transfer of negation lemma (above) }
           !(P <==> ("!Q <==> R"))
        <==> { definition of discrepancy }
           !(P <==> (Q <=!=> R))
        <==> { definition of discrepancy }
           P <=!=> (Q <=!=> R)
    QED

    Now we have the analogs of the <==> T and <==> F rules.  We
    leave the proofs to you.

    Lemma (<=!=> F rule).  |- X <=!=> F <==> X

    Lemma (<=!=> T rule).  |- X <=!=> T <==> !X
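    Aside (ours): on Bool, the discrepancy is exclusive-or, i.e.
    (/=).  A quick enumeration check of its associativity and of
    the mixed association law with <==>:

        main :: IO ()
        main = mapM_ print
          [ and [ ((p /= q) /= r) == (p /= (q /= r))
                | p <- bs, q <- bs, r <- bs ]          -- <=!=> associative
          , and [ (p == (q /= r)) == ((p == q) /= r)
                | p <- bs, q <- bs, r <- bs ]          -- mixed association
          ]
          where bs = [False, True]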
CS 641 Lecture                   -*- Outline -*-

IV. An algebraic presentation of the quantifiers, following Cohen

    This follows chapter 3 of Cohen's book "Programming in the
    1990s", Springer-Verlag, 1990.  The ideas are originally due
    to Dijkstra, van Gasteren, and Backhouse.

    Cohen abstracts to generalized expressions, and then
    specializes to universal and existential quantification.  This
    is similar to Back and von Wright's use of the theory of
    complete lattices.  But we skip all of that.

------------------------------------------
ASCII NOTATION

``(\forall i :: U.i)'' means (\forall i * U.i)
``(\exists i :: U.i)'' means (\exists i * U.i)

Note: always use parentheses around quantified expressions.

CONVENTIONS

We write U.i to show that U depends on i (i.e., that U.i has free
occurrences of i).  Thus (U.i)[i := t] == U.t.

We assume that Z has no free occurrences of i.
------------------------------------------

    Note: all the theorems here are from Cohen's book, but it's
    not useful to just list them.  Refer the students to the book
    or this file, and just work a few of their proofs.

 A. Universal quantification (3.2)

    This is generalized conjunction...

    Cohen, like Dijkstra, uses the notation (\forall i : R.i : U.i)
    for Back and von Wright's (\forall i | R.i * U.i).  I think
    Cohen's notation works better in ASCII, so I'll use that below.

------------------------------------------
BOUNDED UNIVERSAL QUANTIFICATION

Definition: bounded universal quantifier

 ^ (\forall i : R.i : U.i) = (\forall i :: R.i ==> U.i)

Omitted range convention:

   (\forall i :: U.i)
   <==> { T ==> rule }
      (\forall i :: "T ==> U.i")
   <==> { definition of bounded quantifier }
      (\forall i : T : U.i)
------------------------------------------

    You can get a feel for this definition by proving the
    following simple lemmas.

    Lemma (trading).
       |- (\forall i : R.i : U.i) <==> (\forall i :: !(R.i) \/ U.i)

    Lemma (trading).
       |- (\forall i : R.i /\ S.i : U.i)
            <==> (\forall i : R.i : S.i ==> U.i)

    By the notation for quantifiers, alpha conversion also works
    for bounded quantifiers.

    Lemma (alpha conversion).  If j is free for i in R.i and U.i,
    then
       |- (\forall i : R.i : U.i) <==> (\forall j : R.j : U.j)
    Proof.  Assume j is free for i in R.i and U.i.  We calculate
    as follows.

        |- (\forall i : R.i : U.i)
        <==> { definition of bounded quantifier }
           (\forall i :: R.i ==> U.i)
        <==> { definition of quantifier notation }
           \forall.(\ i . R.i ==> U.i)
        <==> { alpha conversion, j is free for i in R.i and U.i }
           \forall.(\ j . R.j ==> U.j)
        <==> { definition of quantifier notation }
           (\forall j :: R.j ==> U.j)
        <==> { definition of bounded quantifier }
           (\forall j : R.j : U.j)
    QED

    A corollary is the following.

    Corollary (dummy transformation rule).  If j does not appear
    in R.i or U.i, and if f is invertible (i.e.,
    not(i = j) <==> not(f.i = f.j)), then
       |- (\forall i : R.i : U.i) <==> (\forall j : R.(f.j) : U.(f.j))
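    Aside (ours): bounded universal quantification over a small
    finite domain, with the first trading lemma checked by
    enumeration.  The names dom, imp, forAllB, and u are invented.

        dom :: [Int]
        dom = [0 .. 4]

        imp :: Bool -> Bool -> Bool
        imp a b = not a || b

        -- (\forall i : R.i : U.i) over the finite domain
        forAllB :: (Int -> Bool) -> (Int -> Bool) -> Bool
        forAllB r u = and [ r i `imp` u i | i <- dom ]

        main :: IO ()
        main = mapM_ print
          [ forAllB even (\i -> (i * i) `mod` 2 == 0)                 -- an instance
          , forAllB even u == and [ not (even i) || u i | i <- dom ]  -- trading
          ]
          where u i = i < 3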
    Cohen calls Back and von Wright's distribute /\ rule the "term
    rule" (this one is for the universal quantifier; there is
    another for existentials).  In hints, he refers to
    left-to-right use of the term rule as "splitting the term" and
    right-to-left use as "joining the term".

    Q: Dijkstra calls this rule "\forall distributes over /\".
       Why is this correct?

    Lemma (term rule).
       |- (\forall i : R.i : P.i /\ Q.i)
            <==> (\forall i : R.i : P.i) /\ (\forall i : R.i : Q.i)
    Proof.  Let z be a fresh variable.

        |- (\forall i : R.i : P.i) /\ (\forall i : R.i : Q.i)
        <==> { ==> antisymmetric }
          * (\forall i : R.i : P.i) /\ (\forall i : R.i : Q.i)
            ==> { \forall introduction, z is fresh }
              * (\forall i : R.i : P.i) /\ (\forall i : R.i : Q.i)
                <==> { definition of bounded quantifier, twice }
                   "(\forall i :: R.i ==> P.i)"
                     /\ "(\forall i :: R.i ==> Q.i)"
                ==> { \forall elimination, twice, z is fresh }
                   ("R.z ==> P.z") /\ ("R.z ==> Q.z")
                <==> { lemma: ==> distributes over /\ (below) }
                   R.z ==> (P.z /\ Q.z)
            . (\forall z :: R.z ==> (P.z /\ Q.z))
            <==> { alpha conversion, i is free for z in the term }
               (\forall i :: R.i ==> (P.i /\ Q.i))
            <==> { definition of bounded quantifier }
               (\forall i : R.i : (P.i /\ Q.i))
          * (\forall i : R.i : (P.i /\ Q.i))
            <==> { definition of bounded quantifier }
               (\forall i :: "R.i ==> (P.i /\ Q.i)")
            <==> { lemma: ==> distributes over /\ (below) }
               (\forall i :: (R.i ==> P.i) /\ (R.i ==> Q.i))
            ==> { /\ introduction }
              * (\forall i :: (R.i ==> P.i) /\ (R.i ==> Q.i))
                ==> { \forall introduction, z is fresh }
                  * (\forall i :: (R.i ==> P.i) /\ (R.i ==> Q.i))
                    ==> { \forall elimination, z is fresh }
                       (R.z ==> P.z) /\ (R.z ==> Q.z)
                    ==> { /\ elimination }
                       (R.z ==> P.z)
                . (\forall z :: R.z ==> P.z)
                <==> { definition of bounded quantifier }
                   (\forall z : R.z : P.z)
                <==> { alpha conversion, i is free for z in the
                       term }
                   (\forall i : R.i : P.i)
              * (\forall i :: (R.i ==> P.i) /\ (R.i ==> Q.i))
                ==> { \forall introduction, z is fresh }
                  * (\forall i :: (R.i ==> P.i) /\ (R.i ==> Q.i))
                    ==> { \forall elimination, z is fresh }
                       (R.z ==> P.z) /\ (R.z ==> Q.z)
                    ==> { /\ elimination }
                       (R.z ==> Q.z)
                . (\forall z :: R.z ==> Q.z)
                <==> { definition of bounded quantifier }
                   (\forall z : R.z : Q.z)
                <==> { alpha conversion, i is free for z in the
                       term }
                   (\forall i : R.i : Q.i)
            . (\forall i : R.i : P.i) /\ (\forall i : R.i : Q.i)
        . (\forall i : R.i : P.i /\ Q.i)
    QED

    Lemma (==> distributes over /\).  For all R, P, Q:
       |- R ==> (P /\ Q) <==> (R ==> P) /\ (R ==> Q)
    Proof.  Let R, P, and Q be given.  Then we calculate as
    follows.

        |- ("R ==> P") /\ ("R ==> Q")
        <==> { lemma, X ==> Y <==> !X \/ Y, twice }
           (!R \/ P) /\ (!R \/ Q)
        <==> { \/ distributivity }
           !R \/ (P /\ Q)
        <==> { lemma, X ==> Y <==> !X \/ Y }
           R ==> (P /\ Q)
    QED

    Another useful rule is what Cohen calls the "range rule".  In
    hints, left-to-right use of this is called "splitting the
    range", and right-to-left use is called "joining the range".

    Lemma (range rule).
       |- (\forall i : P.i \/ Q.i : U.i)
            <==> (\forall i : P.i : U.i) /\ (\forall i : Q.i : U.i)
    Proof.

        |- "(\forall i : P.i : U.i)" /\ "(\forall i : Q.i : U.i)"
        <==> { definition of bounded quantifiers, twice }
           (\forall i :: P.i ==> U.i) /\ (\forall i :: Q.i ==> U.i)
        <==> { joining the term }
           (\forall i :: "(P.i ==> U.i)" /\ "(Q.i ==> U.i)")
        <==> { lemma, X ==> Y <==> !X \/ Y, twice }
           (\forall i :: (!(P.i) \/ U.i) /\ (!(Q.i) \/ U.i))
        <==> { \/ distributivity }
           (\forall i :: (!(P.i) /\ !(Q.i)) \/ U.i)
        <==> { de Morgan }
           (\forall i :: (!(P.i \/ Q.i)) \/ U.i)
        <==> { lemma, X ==> Y <==> !X \/ Y }
           (\forall i :: (P.i \/ Q.i) ==> U.i)
        <==> { definition of bounded quantifiers }
           (\forall i : (P.i \/ Q.i) : U.i)
    QED

    Back and von Wright use the following rule for vacuous
    quantifiers, which follows directly from the side conditions
    and the introduction and elimination rules.

    Lemma (\forall vacuous).  If v is not free in t, then
       |- (\forall v :: t) <==> t
    Proof.  (This is Back and von Wright's exercise 6.8a.)

    The following corresponds to the \forall-vacuous rule for
    bounded quantifiers.  Note the extra side condition, and how
    it is found in the proof of the second case below.  (The
    formal statement of this side condition was determined from
    the proof, since it's a bit unclear in Cohen's treatment.)

    Lemma (constant term rule).  If (\forall i :: !(R.i)) <==> F,
    and if i is not free in Z, then
       |- (\forall i : R.i : Z) <==> Z
    Proof.  Suppose (\forall i :: !(R.i)) <==> F, and suppose i is
    not free in Z.

        |- (\forall i : R.i : Z)
        <==> { definition of bounded quantifier }
           (\forall i :: R.i ==> Z)
        <==> { case analysis }
          * [ Z ]
               (\forall i :: R.i ==> "Z")
            <==> { by assumption, Z <==> T }
               (\forall i :: R.i ==> T)
            <==> { T greatest }
               (\forall i :: T)
            <==> { \forall vacuous }
               T
            <==> { by assumption, T <==> Z }
               Z
          * [ !Z ]
               (\forall i :: R.i ==> "Z")
            <==> { by assumption, Z <==> F }
               (\forall i :: "R.i ==> F")
            <==> { ==> F rule }
               (\forall i :: !(R.i))
            <==> { by assumption, (\forall i :: !(R.i)) <==> F }
               F
            <==> { by assumption, F <==> Z }
               Z
        . Z
    QED
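    Aside (ours): why the constant term rule needs a non-empty
    range.  Over an empty range the bounded \forall is vacuously
    T, whatever Z is.  A sketch with invented names:

        dom :: [Int]
        dom = [0 .. 4]

        forAllB :: (Int -> Bool) -> (Int -> Bool) -> Bool
        forAllB r u = and [ not (r i) || u i | i <- dom ]

        main :: IO ()
        main = mapM_ print
          [ forAllB (\i -> i > 9) (const False)  -- empty range: True, though Z = F
          , forAllB (\i -> i > 2) (const False)  -- non-empty range: False, equals Z
          ]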
    A corollary is the following.

    Lemma (conjunction distributes over universal quantification
    with non-empty range).  If (\forall i :: !(R.i)) <==> F, and
    if i is not free in Z, then
       |- (\forall i : R.i : Z /\ U.i)
            <==> Z /\ (\forall i : R.i : U.i)
    Proof.  Assume (\forall i :: !(R.i)) <==> F and i is not free
    in Z.

        |- (\forall i : R.i : Z /\ U.i)
        <==> { splitting the term }
           "(\forall i : R.i : Z)" /\ (\forall i : R.i : U.i)
        <==> { constant term rule, assumption }
           Z /\ (\forall i : R.i : U.i)
    QED

    However, disjunction distributes over bounded universal
    quantifiers with no such side condition.

    Lemma (disjunction distributes over universal quantification).
    If i is not free in Z, then
       |- (\forall i : R.i : Z \/ U.i)
            <==> Z \/ (\forall i : R.i : U.i)
    Proof.  Suppose i is not free in Z.  Then we calculate as
    follows.

        |- (\forall i : R.i : Z \/ U.i)
        <==> { case analysis }
          * [ Z ]
               (\forall i : R.i : "Z" \/ U.i)
            <==> { by assumption, Z <==> T }
               (\forall i : R.i : "T \/ U.i")
            <==> { T \/ rule }
               (\forall i : R.i : T)
            <==> { definition of bounded quantifier }
               (\forall i :: "R.i ==> T")
            <==> { T greatest }
               (\forall i :: T)
            <==> { \forall vacuous }
               T
            <==> { T \/ rule }
               "T" \/ (\forall i : R.i : U.i)
            <==> { by assumption, T <==> Z }
               Z \/ (\forall i : R.i : U.i)
          * [ !Z ]
               (\forall i : R.i : "Z" \/ U.i)
            <==> { by assumption, Z <==> F }
               (\forall i : R.i : "F \/ U.i")
            <==> { F \/ rule }
               (\forall i : R.i : U.i)
            <==> { F \/ rule }
               "F" \/ (\forall i : R.i : U.i)
            <==> { by assumption, F <==> Z }
               Z \/ (\forall i : R.i : U.i)
        . Z \/ (\forall i : R.i : U.i)
    QED

    Lemma (empty range rule).  |- (\forall i : F : U.i) <==> T
    Proof.

        |- (\forall i : F : U.i)
        <==> { definition of bounded quantifier }
           (\forall i :: F ==> U.i)
        <==> { F ==> rule }
           (\forall i :: T)
        <==> { \forall vacuous }
           T
    QED

    Here are some other lemmas you may enjoy proving.

    Lemma (nesting rule).  If j does not occur free in P.i, then
       |- (\forall i : P.i : (\forall j : Q.i.j : U.i.j))
            <==> (\forall i :: (\forall j : P.i /\ Q.i.j : U.i.j))

    Note: used from left to right this is called "nesting", and
    from right to left this is called "unnesting".

    Lemma (\forall one-point).  If i is not free in Z, then
       |- (\forall i : i == Z : U.i) <==> U.Z
    Proof (this is Back and von Wright's exercise 6.9).

    Lemma (implication distributes over universal quantification).
    If i is not free in Z, then
       |- (\forall i : R.i : Z ==> U.i)
            <==> Z ==> (\forall i : R.i : U.i)
    Proof.  (See p. 53 in Cohen's book.)

    Back and von Wright also have the following rule:

    Lemma (distribute \forall over ==>).  If i is not free in Z,
    then
       |- (\forall i :: U.i ==> Z) <==> (\exists i :: U.i) ==> Z

    Can you generalize the above to a rule with ranges included?

 B. existential quantification (3.3)

    This is generalized disjunction.

    Cohen takes the following as a postulate for defining
    existential quantifiers.

    Lemma (generalized de Morgan).
       |- (\exists i : R.i : U.i) <==> !(\forall i : R.i : !(U.i))

    We also have the dual.

    Lemma (generalized de Morgan).
       |- (\forall i : R.i : U.i) <==> !(\exists i : R.i : !(U.i))

    From these the meaning of the bounded quantifier notation for
    existentials emerges.

    Lemma (trading).
       |- (\exists i : R.i : U.i) <==> (\exists i :: R.i /\ U.i)
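    Aside (ours): generalized de Morgan over a finite domain,
    with the existential in its trading form.  The names are
    invented; this is a sanity check only.

        dom :: [Int]
        dom = [0 .. 4]

        existsB, forAllB :: (Int -> Bool) -> (Int -> Bool) -> Bool
        existsB r u = or  [ r i && u i       | i <- dom ]   -- trading form
        forAllB r u = and [ not (r i) || u i | i <- dom ]

        main :: IO ()
        main = print (existsB even odd
                        == not (forAllB even (not . odd)))  -- True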
    We have duals of all of the lemmas above as well.

    Lemma (term rule).
       |- (\exists i : R.i : P.i \/ Q.i)
            <==> (\exists i : R.i : P.i) \/ (\exists i : R.i : Q.i)

    Lemma (range rule).
       |- (\exists i : P.i \/ Q.i : U.i)
            <==> (\exists i : P.i : U.i) \/ (\exists i : Q.i : U.i)

    Lemma (disjunction distributes over existentials with
    non-empty range).  If (\exists i :: R.i), and if i is not free
    in Z, then
       |- (\exists i : R.i : Z \/ U.i)
            <==> Z \/ (\exists i : R.i : U.i)

    Lemma (conjunction distributes over existentials).  If i is
    not free in Z, then
       |- (\exists i : R.i : Z /\ U.i)
            <==> Z /\ (\exists i : R.i : U.i)

    Lemma (empty range rule).  |- (\exists i : F : U.i) <==> F

    Lemma (constant term rule).  If (\exists i :: R.i), and if i
    is not free in Z, then
       |- (\exists i : R.i : Z) <==> Z

    Lemma (dummy transformation rule).  If f is invertible (i.e.,
    i != j <==> f.i != f.j), then
       |- (\exists i : R.i : U.i) <==> (\exists j : R.(f.j) : U.(f.j))

    Lemma (nesting rule).
       |- (\exists i : P.i : (\exists j : Q.i.j : U.i.j))
            <==> (\exists i,j : P.i /\ Q.i.j : U.i.j)

    Lemma (one-point rule).  If i is not free in Z, then
       |- (\exists i : i == Z : U.i) <==> U.Z
    (See p. 122 for half of the proof.)

    Back and von Wright also have:

    Lemma (\exists vacuous).  If i is not free in Z, then
       |- (\exists i :: Z) <==> Z
    (Proof is exercise 6.8b.)

    Lemma (distribute \exists over ==>).  If i is not free in Z,
    then
       |- (\exists i :: U.i ==> Z) <==> (\forall i :: U.i) ==> Z

    Lemma (distribute \exists over ==>).  If i is not free in Z,
    then
       |- (\exists i :: Z ==> U.i) <==> Z ==> (\exists i :: U.i)

    Q: Can you generalize the rules for distribution over ==> to
       include the ranges?

    Q: Can you generalize the rules for introduction and
       elimination to include the ranges?
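    Aside (ours): the two distribution-over-==> laws above,
    checked over a small non-empty domain for both values of Z.
    Names are invented; a sanity check, not a proof (and note the
    second law needs the domain to be non-empty).

        dom :: [Int]
        dom = [0 .. 4]

        imp :: Bool -> Bool -> Bool
        imp a b = not a || b

        main :: IO ()
        main = mapM_ print
          [ and [ and [ u i `imp` z | i <- dom ]
                    == (or [ u i | i <- dom ] `imp` z)
                | z <- [False, True] ]                 -- \forall over ==>
          , and [ or [ z `imp` u i | i <- dom ]
                    == (z `imp` or [ u i | i <- dom ])
                | z <- [False, True] ]                 -- \exists over ==>
          ]
          where u i = i > 2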