CS 641 meeting  -*- Outline -*-

* Homomorphisms of simple commands and the induction rule

Want more general proof rules for procedures that don't follow
from the postulate:

	wp.h = wp.(body.h)

Useful for partial correctness and incorrectness proofs.

** idea

We're going to do something similar to what is done in
denotational semantics.

Recall that in den. sem., you give meaning to recursive
definitions by first converting them to a generator.

--------------------
DENOTATIONAL SEMANTICS FOR RECURSION

to give meaning to:

	fact n = if n = 0 then 1 else n * fact(n-1)

eliminate recursion by abstraction:

	G f n = if n = 0 then 1 else n * f(n-1)

so that fact is a fixed point of G.
--------------------

We're going to do something similar.  We want to give a
definition of wlp and wp for recursive procedures, and we'll do
that by abstracting wlp and wp out of their definitions (for
recursive procedures).  The things that are like wlp and wp will
be called homomorphisms...

** homomorphisms and simple commands (2.6)

This is the start of a yet more powerful formulation to deal with
recursion.  We will abstract away from wp and wlp, to
homomorphisms.

*** homomorphisms

These allow more freedom of interpretation for commands
(e.g., procedures).

-------------------
def: a function from commands to monotone predicate
transformers, w, is a *homomorphism* iff

	w.(c;d) = w.c o w.d,  for all c,d,  and

	w.([] j \in J :: c.j).p = (all j \in J :: w.(c.j).p)
--------------------

Emphasize that w.c is monotone.

For example, wp and wlp are homomorphisms, by the rules
for ; and [].

*** simple commands

Going to build the language up in layers.  This gives a semantic
framework that we'll apply to Hesselink's language, but you could
apply it to any language.

----------------------------
LAYERS OF DEFINITION

	recursive procedures
	________________________________

	closure of S over ; and []  (S(*))
	________________________________

	simple commands (S)
----------------------------

You can choose for simple commands anything with wp and wlp
given.
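The two homomorphism equations can be seen concretely in a small model.
Below is a minimal Python sketch, under the simplifying assumption that a
state is just the integer value of a single variable v and a predicate is
a function from states to booleans; the names skip, assign, seq, and
choice are illustrative, not Hesselink's notation.

```python
# Toy model: a state is the integer value of v; a predicate is a
# function from states to bool.  Simple commands get fixed transformers;
# seq and choice build the closure S(*) homomorphically.

def skip(q):                      # wp.skip.q = q
    return q

def assign(e):                    # wp.(v := e).q = q[v := e]
    return lambda q: (lambda v: q(e(v)))

def seq(wc, wd):                  # homomorphism: w.(c;d) = w.c o w.d
    return lambda q: wc(wd(q))

def choice(wc, wd):               # w.(c [] d).q = w.c.q /\ w.d.q
    return lambda q: (lambda v: wc(q)(v) and wd(q)(v))

# wp of (v := v+2 ; v := v-1) applied to (v >= 0):
w = seq(assign(lambda v: v + 2), assign(lambda v: v - 1))
q = lambda v: v >= 0
```

Here w(q) works out to the predicate (v+1 >= 0), as substitution
predicts, and choice conjoins the branches' preconditions.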
The closure is automatic, and then we'll see how to define
recursive procedures in this framework.

**** S

Simple commands, with fixed interpretations.

--------------
def: S is a set of *simple commands*.  Assume s \in S is not a
composition or choice between other commands.
--------------

All s \in S are atomic; e.g., primitive ADT operations may be
taken as simple commands.  In applications, one chooses a
convenient set of simple commands.

**** S(*)

Closure of S.

--------------
abstract syntax:

	s \in S = simple-command
	c \in S(*) = comp-choice-command

	c ::= s | c_1 ; c_2 | c_1 [] c_2
---------------

S(*) is the closure of S under sequential composition and
nondeterminate choice.

*** recovering wp and wlp

-----------------------
def: WP = { w | w is a homomorphism, w.s = wp.s, s \in S }

def: w \in WLP  equiv  (all s \in S :: w.s = wlp.s)
                       /\ w is a homomorphism
-----------------------

(recall these forms of set comprehension are equivalent)

Thm: If c \in S(*), then (all w \in WP :: w.c = wp.c)

Pf: by structural induction, using the homomorphism property.

Thm: If c \in S(*), then (all w \in WLP :: w.c = wlp.c)

Q: What is still missing?

	recursion!

** induction rules (2.7)

*** induction rule idea

If we're going to use the specification of a procedure to prove
calls, it seems like we should be able to use the spec of a
recursive procedure for recursive calls.

This is okay for partial correctness, but won't give us total
correctness, as in the following:

	proc h(x:item, var y: item)
	  h(x,y)

So this rule must be for partial, not total correctness.

--------------------
HOARE'S PARTIAL CORRECTNESS RULE FOR PROCS

	p {h} q |- p {body.h} q
	_________________________

	        p {h} q
--------------------

*** induction rule

The problem is to work in standard logic, not to define a
separate logic.

Q: why?

	so that we can use whatever logical means are at hand,
	and aren't limited by the tools of a specific Hoare logic

	elegance

How to get rid of derivability (|-)?
Recall that the above means

	if ([p ==> wlp.h.q] ==> [p ==> wlp.(body.h).q])
	then [p ==> wlp.h.q]

But this won't do as a definition of wlp.h for recursive
procedures, because it would be circular!

The way out of the circularity is, as in denotational semantics,
to abstract wlp out of the criterion part of this definition.
To do this, quantify over all w \in WLP.

-------------------------
INDUCTION RULE (SIMPLIFIED)

Suppose for every w \in WLP

	[p ==> w.h.q] ==> [p ==> w.(body.h).q].

Then [p ==> wlp.h.q]
-------------------------

This gives a partial definition of wlp.h.q for procedures h,
which may be recursive.

Hoare's rule allows implicit quantification over free variables.
So ...

-------------------------
HOARE'S INDUCTION RULE (HESSELINK'S FORMULA 23)

Suppose for every w \in WLP

	(all i :: [p.i ==> w.(h.i).(q.i)])
	==> (all i :: [p.i ==> w.(body.(h.i)).(q.i)]).

Then (all i :: [p.i ==> wlp.(h.i).(q.i)])
-------------------------

The antecedent of the implication is called the induction
hypothesis.

Q: where is the base case?

Hesselink remarks that this will be proved correct by use of
orderings on solutions to equations (a la denotational
semantics), in which wlp is defined as the weakest solution
for w \in WLP to the equation

	w: w.h = w.(body.h)

(see chapter 4, esp. sections 4.4 and 4.9)

*** example of proof with induction rule

Suppose v is an integer program variable.  Consider the
following liberal specification.

-----------------------
proc h()
  { ext v!
    all i : i \in Integers :
      liberally pre v >= i, post v >= i }

That is: [v >= i ==> wlp.h.(v >= i)]

body.h = (skip [] v := v+2; h; v := v-1)
-----------------------

h doesn't necessarily terminate, but it seems like it should
satisfy this specification of not reducing the value of v.

Remark: it would be interesting to try to prove this with the
old rule...
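Before the formal proof, a quick sanity check: the terminating executions
of h can be enumerated by bounding the recursion depth.  This is a toy
simulation of my own (the function runs and the depth bound are not from
the text); each unfolding of the second branch adds 2 and later subtracts
1, so terminated runs can only increase v.

```python
def runs(v, depth):
    """Final values of v for all terminating executions of
    body.h = (skip [] v := v+2; h; v := v-1), unfolding the
    recursion at most `depth` times (a bounded toy simulation)."""
    results = {v}                    # the skip branch stops here
    if depth > 0:                    # other branch: v += 2, recurse, v -= 1
        for r in runs(v + 2, depth - 1):
            results.add(r - 1)
    return results
```

For any start value, every enumerated result is >= the start value, which
is exactly what the liberal specification claims about terminated runs.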
Correctness proof:

let h.i = h, and p.i = q.i = (v >= i)

The induction hypothesis is, for w \in WLP,

	(all i :: [i \in Integers /\ (v >= i) ==> w.h.(v >= i)])

That is,

	(all i \in Integers :: [v >= i ==> w.h.(v >= i)])

Now we need to prove that the induction hypothesis implies

	(all i \in Integers :: [v >= i ==> w.(body.h).(v >= i)])

    w.(body.h).(v >= i)
  = {def of body.h}
    w.(skip [] v := v+2; h; v := v-1).(v >= i)
  = {w is a homomorphism}
    w.skip.(v >= i) /\ w.(v := v+2; h; v := v-1).(v >= i)
  = {defs of WLP and wlp for simple commands}
    v >= i /\ w.(v := v+2).(w.h.(v >= i+1))
 <== {induction hypothesis, with i := i+1, and monotonicity
      of w.(v := v+2)}
    v >= i /\ w.(v := v+2).(v >= i+1)
  = {defs of WLP and wlp for assignment}
    v >= i /\ v+2 >= i+1
  = {arithmetic}
    v >= i /\ v >= i-1
  = {arithmetic}
    v >= i

*** necessity rule

Necessity of preconditions, as opposed to sufficiency; this is
useful for proofs of incorrectness.

Q: what does it mean to prove a procedure incorrect?

-------------------------
NECESSITY RULE (HESSELINK'S FORMULA 26)

Suppose for every w \in WP

	(all i :: [w.(h.i).(q.i) ==> p.i])
	==> (all i :: [w.(body.(h.i)).(q.i) ==> p.i]).

Then (all i :: [wp.(h.i).(q.i) ==> p.i])
-------------------------

Note that this works with WP instead of WLP.

Hesselink remarks that this will be proved correct by use of
orderings on solutions to equations (a la denotational
semantics), in which wp is defined as the strongest solution
for w \in WP to the equation

	w: w.h = w.(body.h)

(see chapter 4, esp. sections 4.4 and 4.9)

*** example of proof with necessity rule

Show that the procedure of the previous example, with

	body.h = (skip [] v := v+2; h; v := v-1)

is not certain to terminate, in any state.  That is:

	[wp.h.true = false]

It is equivalent to show [wp.h.true ==> false].

Proof: by the necessity rule, it suffices to prove, for all
w \in WP,

	[w.h.true ==> false] ==> [w.(body.h).true ==> false]

equivalently

	[~w.h.true] ==> [~w.(body.h).true]

The induction hypothesis is [~w.h.true].
    ~w.(body.h).true
  = {definition of body.h}
    ~w.(skip [] v := v+2; h; v := v-1).true
  = {w is a homomorphism}
    ~(w.skip.true /\ w.(v := v+2; h; v := v-1).true)
  = {calculus, w is a homomorphism}
    ~w.skip.true \/ ~w.(v := v+2).(w.h.(w.(v := v-1).true))
  = {w = wp for simple commands, wp of skip and assignment}
    ~true \/ ~w.(v := v+2).(w.h.true)
  = {calculus, induction hypothesis}
    ~w.(v := v+2).false
  = {w = wp for simple commands, wp of assignment}
    ~false
  = {calculus}
    true

Q: is h total?
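Hesselink's remark about orderings on solutions can be made concrete for
this example.  Writing x for a candidate value of w.h.true, the body
yields the equation x.v = (true /\ x.(v+2)).  The sketch below is my own
modeling, not Hesselink's: it checks, over a sample of states, that both
the everywhere-false and everywhere-true predicates solve the equation.
The strongest solution, false, is wp.h.true (h is not guaranteed to
terminate in any state), while the weakest solution, true, is wlp.h.true.

```python
# f maps a candidate x for w.h.true to w.(body.h).true:
#     w.(skip [] v := v+2; h; v := v-1).true
#   = w.skip.true /\ w.(v := v+2).(x after w.(v := v-1).true)
#   = true /\ x(v+2)
def f(x):
    return lambda v: True and x(v + 2)

false_pred = lambda v: False      # candidate: the strongest solution
true_pred  = lambda v: True       # candidate: the weakest solution

# both candidates are fixed points of f on a sample of states:
samples = range(-5, 6)
false_fixed = all(f(false_pred)(v) == false_pred(v) for v in samples)
true_fixed  = all(f(true_pred)(v) == true_pred(v) for v in samples)
```

That the equation has distinct strongest and weakest solutions is exactly
why wp and wlp must be singled out by an ordering, as chapter 4 does.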