Assertion logic

Reasoning about program states based on the traces of a program may be quite cumbersome. A further disadvantage is that it relies to a great extent on our operational intuition of the effect of a program on a state. Instead, [Hoare69] has proposed an axiomatic characterization of the correctness properties of programming constructs. An axiomatic definition allows us to prove the correctness of a program with respect to the conditions stating its requirements by applying the appropriate inference rules.
  
  
  • assignment - {Q [x:=e]} x = e {Q}

  • composition - {P} S1 {R} ∧ {R} S2 {Q} ⇒ {P} S1;S2 {Q}

  • conditional - {P ∧ b} S {Q} ⇒ {P} if (b) S {Q}

  • iteration - {I ∧ b} S {I} ⇒ {I} while (b) S {I ∧ ¬b}

  • consequence - P → R ∧ {R} S {Q} ⇒ {P} S {Q}

  • consequence - R → Q ∧ {P} S {R} ⇒ {P} S {Q}

  • procedural abstraction - m(x) → S(x) ∧ {P} S(e) {Q} ⇒ {P} m(e) {Q}


slide: The correctness calculus

In slide 10-correctness, correctness axioms have been given for assignment, sequential composition, conditional statements and iteration. These axioms rely on the side-effect free nature of expressions in the programming language. Also, they assume convertibility between programming language expressions and the expressions used in the assertion language. The assignment axiom states that for any post-condition Q we can derive the (weakest) pre-condition by substituting the value e assigned to the variable x for x in Q. This axiom is related to the weakest pre-condition calculus introduced by [Dijkstra76]. It is perhaps the most basic axiom in the correctness calculus for imperative programs. As an example, consider the assignment x = 3 and the requirement {x = y}. Applying the assignment axiom we have {3 = y} x = 3 {x = y}. Consequently, when we are able to prove that P implies 3 = y, we have, by virtue of the first consequence rule, proved that {P} x = 3 {x = y}.

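To make this concrete, the following C sketch replays the derivation with run-time assert checks standing in for the static proof obligations; the initialization of y is an assumption added for illustration, since the pre-condition {3 = y} only holds for states that satisfy it.

  #include <assert.h>

  int main(void) {
      int y = 3;         /* establish a pre-condition P that implies 3 = y */
      assert(3 == y);    /* the derived (weakest) pre-condition Q[x:=3]    */
      int x = 3;         /* the assignment x = 3                           */
      assert(x == y);    /* the required post-condition Q: x = y           */
      return 0;
  }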
The next rule, for sequential composition, allows us to break a program (fragment) into parts. For convenience, the correctness formulae for multiple program fragments that are composed in sequential order are often organized as a so-called proof outline of the form {P} S1 {R} S2 {Q}. When sufficiently detailed, proof outlines may be regarded as a proof. For example, the proof outline {s = i*(i+1)/2}  i = i + 1;  {s + i = i * (i+1)/2}  s = s + i;  {s = i*(i+1)/2} constitutes a proof for the invariance property discussed earlier. Clearly, the correctness formulae for the two individual components can be proved by applying the assignment axiom. Using the sequential composition rule, these components can now be easily glued together.

As a third rule, we have a rule for conditional statements of the form if (b) S. As an example, consider a correctness formula {P} if (b) x = e {Q}, in which the body of the conditional is a single assignment. All we need to prove, by virtue of the inference rule for conditional statements, is that {P ∧ b} x = e {Q}, which (again) follows from the assignment axiom, combined with the first consequence rule, as soon as P ∧ b implies Q [x:=e] (a sketch in code is given below).

As the last rule for proving correctness, we present here the inference rule for iterative (while) statements. The rule states that whenever we can prove that a certain invariant I is maintained when executing the body of the while statement (provided that the condition b is satisfied), then, when the loop terminates, we know that both I and ¬b hold. As an example, the formula {true} while (i > 0) i--; {¬ (i > 0)} trivially follows from the while rule by taking I to be true. Actually, the while rule plays a crucial role in constructing verifiable algorithms in a structured way. The central idea, advocated among others by [Gries], is to develop the algorithm around a well-chosen invariant; an annotated loop illustrating this style is sketched below. Several heuristics may be applied to find the proper invariant, starting from the requirements expressed in the (output) predicate stating the post-condition.

In addition to the assignment axiom and the basic inference rules related to the major constructs of imperative programming languages, we may use so-called structural rules to facilitate the actual proof of a correctness formula. The first structural (consequence) rule states that we may replace a pre-condition for which we can prove a correctness formula (pertaining to a program fragment S) by any pre-condition of which the original pre-condition is a consequence, in other words one that is stronger than the original pre-condition. Similarly, we may replace a post-condition for which we know a correctness formula to hold by any post-condition that is weaker than the original post-condition. As an example, suppose that we have proved the formula {x ≥ 0} S {x < 0}; then we may, by simultaneously applying the two consequence rules, derive the formula {x > 0} S {x ≤ 0}, which amounts to strengthening the pre-condition and weakening the post-condition. The intuition justifying this derivation is that we can safely promise less and require more, as it were.
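The conditional rule can be pictured in the same style. The particular guard and assignment below are only an assumed instance, not one fixed by the text: for the formula {true} if (x < y) x = y {x ≥ y}, the rule asks us to prove {x < y} x = y {x ≥ y}, which the assignment axiom reduces to the trivially true implication x < y → y ≥ y.

  #include <assert.h>

  int main(void) {
      int x = 1, y = 2;   /* arbitrary initial state; pre-condition P is true */
      if (x < y)          /* the guard b                                      */
          x = y;          /* proof obligation: {true ∧ x < y} x = y {x ≥ y}   */
      assert(x >= y);     /* the post-condition Q                             */
      return 0;
  }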
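As a sketch of how a proof outline built around the invariant s = i*(i+1)/2 relates to executable code, the loop below instruments the two-statement body shown above with assertions; the enclosing function, the bound n and the guard i < n are assumptions added for illustration.

  #include <assert.h>

  /* Sum 1..n while maintaining the invariant s = i*(i+1)/2. */
  int sum_up_to(int n) {
      int i = 0, s = 0;                  /* invariant holds initially: 0 = 0*(0+1)/2 */
      while (i < n) {
          assert(s == i * (i + 1) / 2);  /* {s = i*(i+1)/2} before the body          */
          i = i + 1;                     /* {s + i = i*(i+1)/2}                      */
          s = s + i;                     /* {s = i*(i+1)/2} re-established           */
      }
      assert(s == i * (i + 1) / 2);      /* on termination I holds, with i >= n      */
      return s;
  }

Here the guard i < n plays the role of b, so the while rule yields the invariant together with the negated guard on termination, which is exactly what the final assertion records.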
Finally, the rule most important to us in the present context is the inference rule characterizing correctness under procedural abstraction. Assume that we have a function m with formal parameter x (for convenience we assume a single parameter, but this is easily generalized to multiple parameters) whose body consists of the statement S(x). If, moreover, we can prove for an arbitrary expression e the correctness formula {P} S(e) {Q}, with e substituted for the formal parameter x in both the conditions and the function body, then we also have {P} m(e) {Q}, provided that P and Q do not contain references to local variables of the function m. In other words, we may abstract from a complex program fragment by defining a function or procedure and use the original (local) correctness proof by properly substituting actual parameters for formal parameters. The procedural abstraction rule, which allows us to employ functions to perform correct operations, may be regarded as the basic construct needed to verify that an object embodies a (client/server) contract.
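A final sketch, under the same caveat that the names and conditions are assumptions chosen for illustration, shows the abstraction rule at the level of code: the correctness formula proved for the body carries over to calls of the function, with the actual argument substituted for the formal parameter.

  #include <assert.h>

  int r;                      /* a program variable mentioned in the conditions  */

  /* m has formal parameter x and body S(x): the assignment r = x.
     By the assignment axiom, {true} S(e) {r = e} for any expression e. */
  void m(int x) {
      r = x;
  }

  int main(void) {
      int y = 7;
      m(y + 1);               /* abstraction rule: {true} m(y + 1) {r = y + 1}   */
      assert(r == y + 1);     /* the post-condition, checked at run time         */
      return 0;
  }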