On this page:
1 Purpose
2 Outline
3 Minimal propositional logic
4 Truth and falsehood
5 Propositional Logic
6 Tactics

Lecture 34: Proofs with Propositional Logic

1 Purpose

Explore interactive proofs more using minimal propositional logic; go over solution to HW12A.

2 Outline

But we aren’t really here to learn about typed functional programming, no matter how rich. Our goal is to prove properties of programs. So how do we get from a typed functional programming language to theorem proving?

3 Minimal propositional logic

We’ll start with a simple problem, perhaps the simplest: how to express, and prove theorems about, minimal propositional logic. That is, the language that has only logical variables and implications. This will allow us to become familiar with expressing theorems and proving them, and to understand what that means!

In Lean, as we’ve said, everything has a type, including logical statements, or propositions. The type of propositions is Prop, and we can declare that we have arbitrary ones using variables:

variable (P Q R : Prop)

Here, we’ve declared three variables that all have type Prop. We can read this as "consider arbitrary propositions P, Q, R".

variable works for any type; we could write "consider arbitrary natural numbers, or arbitrary lists, or arbitrary graphs, etc.".
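
For instance (a small sketch; these particular declarations are not from the lecture):

variable (n m : Nat) (xs : List Nat) (f : Nat -> Nat)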

Now, Lean is both a programming language and a tool for expressing and proving theorems. It could have added separate rules for logic: implication as an arrow ⇒, and then logical rules for introduction (abstraction) and elimination (modus ponens), and many more rules for different logical constructions, but it turns out that the programming language is rich enough to express the logic. In the context of minimal propositional logic, rather than having a separate implication arrow, we can re-use the same arrow -> that we use for functions.

Thus Lean makes no distinction between a function that takes P and produces Q and a theorem that, given P, yields Q. This is partly made possible because function arrows in Lean are dependent: the output type can depend on the value of the input. This is quite important for theorems: e.g., in the example we saw last time, x is the value of the input, and it is used in the output type (a proof about equality). It can also be useful for programming, though we will not explore this much: e.g., you might encode invariants of a data structure in the types, and the output could then relate in sophisticated ways to the input.
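
As a small sketch of such a dependent arrow (the exact example from last time may have differed), note how the bound value x appears in the result type:

-- The input value x occurs in the output type, a proposition about equality.
theorem plus_zero : forall (x : Nat), x + 0 = x :=
  fun x => rfl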

If a function arrow is an implication, then what is a proof of such a statement? For example, we can write down the following theorem:

variable (P Q R : Prop)

theorem T : (P -> Q) -> P -> Q

We can read this as: given a proof of P -> Q and a proof of P, we can prove Q. The proof is a convincing argument that one can proceed as described. A particularly convincing argument is a procedure that, given the inputs, produces the output. What procedure could that be? Again, we can appeal to our understanding of this as a type in a programming language. The procedure is a program with the given type. We can complete the proof as follows:

variable (P Q R : Prop)

theorem T : (P -> Q) -> P -> Q :=
  fun Hpq p => Hpq p

i.e., a proof is a term that inhabits the type that is the theorem. Here, it is a function that takes Hpq and p and returns Hpq applied to p.

We could write exactly the same thing in LSL (or ISL+):

(: T' (All (P Q) (-> (-> P Q) P Q)))
(define (T' Hpq Hp) (Hpq Hp))

Note that the type takes (P -> Q) as an argument and returns a function from P to Q. For compactness, we wrote the term as a function of two arguments, rather than a series of functions of one argument, but the latter would be equivalent:

variable (P Q R : Prop)

theorem T : (P -> Q) -> P -> Q :=
  fun Hpq => fun p => Hpq p

We could also write this without the anonymous function, as:

variable (P Q R : Prop)

def T (Hpq : P -> Q) (p : P) : Q := Hpq p

This same dichotomy between functions defined with or without lambda is present in LSL and ISL+, where the following two are identical:

(define (myfun x y) (+ x y))
(define myfun (lambda (x y) (+ x y)))

As an exercise, construct a term for the following theorem:

variable (P Q R : Prop)
theorem T' : (P -> Q -> R) -> Q -> P -> R

Last time, we introduced Lean, the notion of proof, and how Lean serves as both a programming language and a proof assistant at the same time. Today, we’re going to go into more detail about what Truth is, how it relates to Proof, and continue on our journey exploring what we can express in Lean, moving from Minimal Propositional Logic to full Propositional Logic.

4 Truth and falsehood

In propositional logic, a formula or statement is valid if, for all assignments of variables, the statement evaluates to true. In PBT and Rosette, "true" statements are those for which we have not yet found a counter-example (thus, they are only possibly true). Sometimes, with Rosette, our inputs are finite and thus we can exhaustively test, but more likely, we made a finite approximation of our actual problem, and only exhaustively tested that approximation.

We can be a lot more precise in Lean, but we also have to be a lot more careful: precision comes at a cost, in that a small mistake could render a so-called theorem not actually true.

So let’s talk about how to represent truth and falsehood in Lean. This will be our second encounter with data definitions in Lean (after Nat), also called inductive types. Inductive types are an important enough part of the system that the theory that underlies Lean is called the Calculus of Inductive Constructions. There is a lot more to them, but for now, we will focus on a couple of important examples. So what are true and false?

Well, it turns out there are two different versions, and the difference is very important. First, we have the ordinary, boolean data definition that should be familiar from any number of programming languages, though perhaps written in an unfamiliar notation.

inductive Bool : Type where
  | false
  | true

Let’s read this carefully. First, it says we are declaring a new inductive type called Bool. In Lean, everything has a type, including types, which is what the : Type is saying: the type of Bool is Type. This data definition has two cases: false and true.

Aside from the syntax, this should be very familiar. And indeed, we have familiar operations on type Bool: and, or, not, if, etc. However, in Lean, we have a separate, much more important definition of truth and falsehood that exists for a very important reason. As we mentioned above, in theorem provers that subscribe to the types-as-theorems idea that originated in AUTOMATH, we should not be able to prove false! One of the oldest statements from logic is ex falso quodlibet. Translated as "from falsehood, anything follows", the idea is that if you have a proof of false, you can prove any theorem. As a result, any logical system in which you can prove false would not be very useful. That means if we use our type Bool as our definition of logical truth and falsehood, we are in trouble, as we can easily "prove" false: we just use the constructor for false.
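
Before turning to the Prop versions, here is the computational Bool in action (a small sketch, using Lean’s built-in boolean operations):

#eval true && false  -- false
#eval true || false  -- true
#eval !true          -- false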

Why is this not an issue in PBT or with Rosette? Essentially, our only reasoning is based on evaluation and assertions (often, of equalities). Our only theorems are individual test cases, and their proofs are based on evaluation, which must terminate. We thus do not have the principle of explosion (ex falso quodlibet), along with just about any other logical reasoning! Indeed, we rarely have proofs: we simply have searches for counter-examples, though in certain cases it can be exhaustive (and thus form a proof).

So how should we define logical truth and falsehood? Truth is the following:

inductive True : Prop where
  | intro

i.e., True is an inductive type that has type Prop and a single constructor, intro. Why Prop and not Type? Lean distinguishes between types that are used for theorems, which have type Prop, and types that are used for computation, which have type Type. The actual situation is a bit more complicated, as there is an entire hierarchy of types that exists to avoid circularities in the logic, but generally we’ll be able to avoid worrying about that.
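
We can ask Lean to confirm this distinction directly (a small sketch):

#check True  -- True : Prop
#check Bool  -- Bool : Type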

Why is True a type that has a single constructor? Well, logical truth is essentially trivial: it tells you nothing, on its own, and thus you can always construct a proof of it (you don’t need to know anything).
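
For example (a sketch), a proof of True is just its constructor:

theorem trivially_true : True := True.intro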

Logical falsehood ends up being a lot more important. How do we define it?

inductive False : Prop

This type is much more baffling, at first. It is a type that has type Prop, but it has no constructors. Why does it have no constructors? Because it is impossible to construct. This is exactly what we want out of our logic, which is our programming language! Thus, we can formally write the statement "from false follows anything", without worrying, since there should be no way of constructing a value of type False. Note the importance of the distinction between the types (the theorem statements) and the terms (the proofs). The type False can easily be written down, and indeed, it is a very important type! We will often want to express that certain cases are impossible. How will we do that? We will write that if such cases occur, False will result, and as a result, we need not do anything more in those cases (as they become trivial).

If, in a proof, we end up with a term of type False, by inspecting the value, we can determine that we must be in an impossible case, because such a value could never have been constructed, and thus no further work is needed.

5 Propositional Logic

Now that we have a way to describe truth and falsehood, let’s continue on with the operators of propositional logic. We already have implication (which exists in minimal propositional logic); how about logical and?

structure And (a b : Prop) : Prop where
  /-- `And.intro : a → b → a ∧ b` is the constructor for the And operation. -/
  intro ::
  /-- Extract the left conjunct from a conjunction. `h : a ∧ b` then
  `h.left`, also notated as `h.1`, is a proof of `a`. -/
  left : a
  /-- Extract the right conjunct from a conjunction. `h : a ∧ b` then
  `h.right`, also notated as `h.2`, is a proof of `b`. -/
  right : b

And a separate infix notation definition:

infixr:35 " /\\ " => And

And is defined with structure, which is a wrapper on top of inductive that allows you to give names to the fields; in this case, left and right. It is shorthand for defining the inductive and defining the accessor functions.
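
For example (a small sketch, assuming the /\ notation defined above), we can construct a proof of a conjunction with And.intro:

variable (P Q : Prop)

theorem and_intro_example (p : P) (q : Q) : P /\ Q := And.intro p q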

To use it, we could write a statement like P /\ Q -> P as:

def and_1 (p : P /\ Q) : P := p.left

Despite the new syntax, this is entirely something you could have written in LSL!

(define-struct intro [a b])
(define-contract (And X Y) (Intro X Y))

(define (and_1 p)
  (intro-a p))

What is that argument p? Well, it has type P /\ Q, i.e., it is a conjunction of two propositions. That is its type; the term is simply a pair, and so we can use projections (.left and .right) to extract the fields from the pair; in this case, the first field is a proof of P (a term of type P). We could also have used p.1.

So what this program does is, given a proof of a conjunction (a term of type P /\ Q), returns a proof of P (a term of that type).

Written in this style, both the theorem (the type) and the proof (the term) are somewhat mixed together.

In addition to logical and, we have logical or, defined as another inductive type:

inductive Or (a b : Prop) : Prop where
  | /-- `Or.inl` is "left injection" into an `Or`. If `h : a` then `Or.inl h : a ∨ b`. -/
    inl (h : a) : Or a b
  | /-- `Or.inr` is "right injection" into an `Or`. If `h : b` then `Or.inr h : a ∨ b`. -/
    inr (h : b) : Or a b

This similarly has infix syntax:

infixr:30 " \\/ " => Or

Again, this could be defined in LSL:

(define-struct inl [a])
(define-struct inr [b])
(define-contract (Or X Y) (OneOf (Inl X) (Inr Y)))

Unlike And, logical or is an inductive type with two cases: Or.inl and Or.inr. This is a constructive view of logical disjunction: if you have a proof of A \/ B, you should either have a proof of A or a proof of B. Thus, the two ways of constructing A \/ B are constructors that take terms of type A and B, respectively.
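
For example (a small sketch), either injection suffices to prove a disjunction:

variable (P Q : Prop)

theorem or_left (p : P) : P \/ Q := Or.inl p
theorem or_right (q : Q) : P \/ Q := Or.inr q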

To use a disjunction, we end up having to consider the two cases: given a term (proof) of type A \/ B, we must handle the case that either we had a proof of A or we had a proof of B. This means that, unlike with And, there are no projections. If x : A \/ B, we don’t have x.1 : A and x.2 : B. Rather, we have two possibilities for x: it is either Or.inl a or Or.inr b, where a : A and b : B.

We can deal with that in code using pattern matching, as follows:

def or_1 (pq : P \/ Q) (hp : P -> R) (hq : Q -> R) : R :=
  match pq with
  | Or.inl p => hp p
  | Or.inr q => hq q

Here, we handle the two possibilities, and invoke the corresponding function (hp or hq) accordingly.

This is exactly analogous to how we program with itemizations in LSL. The same function could be defined with our data definition for Or above as:

(: or_1 (All (P Q R) (-> (Or P Q) (-> P R) (-> Q R) R)))
(define (or_1 pq hp hq)
  (cond [(inl? pq) (hp (inl-a pq))]
        [(inr? pq) (hq (inr-b pq))]))

Logical not is defined not directly, but by way of logical falsehood:

def Not (a : Prop) : Prop := a -> False

As noted before, logical negation is defined in terms of False, and notation is provided as:

notation:max "¬" p:40 => Not p
#print Not -- def Not : Prop → Prop := fun a => a → False

That is, ¬P holds if P -> False. This is a constructive view of negation: the negation of P is true if, from P, you can construct a proof of False. It is also why we end up using False a lot in proofs, even while we rarely use True, since whenever we are reasoning about negation, we end up reasoning about False.
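
As a tiny sketch, Not False unfolds to False -> False, a type inhabited by the identity function:

theorem false_implies_false : Not False := fun f => f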

One obvious consequence of this is that the statement And P (Not P) is nonsense: a statement can never be both true and false. If we unfold the definition of Not, we see that what that statement is saying is that P -> False and P. Thus, we can project out the P -> False function and the P, and apply the first to the second and get a proof of False.

variable (P : Prop)
theorem F : (And P (Not P)) -> False :=
  fun andpnotp => andpnotp.2 andpnotp.1

Now that we have a proof of False (a term that has type False), we truly are in an impossible situation, as if you recall, there are no terms of type False. It is defined as an inductive type with no constructors:

inductive False : Prop

As a result, there is an elimination rule (a rule for using the term) that says, if we have a proof of False, we can conclude anything.

#check False.elim -- False.elim : False → ?m.2

For example, if we wanted to prove the statement 1 = 2 (which is a perfectly well-formed statement, just not a provable one), and we knew And P (Not P), we could do that as follows:

theorem T : (And P (Not P)) -> 1 = 2 :=
  fun andpnotp => False.elim (andpnotp.2 andpnotp.1)

i.e., we first pull out the two components of the conjunction, applying the second to the first, which gives us a term of type False. We can then use False.elim, which takes something of type False and returns anything! In particular, it will produce a proof of 1 = 2 in this case.

6 Tactics

Up to now, we’ve mostly been writing proofs as explicit terms, but that is actually done quite rarely. Rather than constructing terms directly, which, for complicated proofs, might be rather involved, we can instead invoke higher-level tactics that will construct a term for us. This is the legacy of Milner’s LCF.

For the rest of this lecture, we’re going to focus primarily on the "Lean Infoview", and how it, along with tactics, allows us to much more easily, and interactively, construct proofs.

We can switch into "tactic mode" at any point using the keyword by. On their own, tactics can make our code harder to read, as they allow you to leave out details that are apparent in terms, but the "Lean Infoview" provides interactive feedback that results in a really nice environment for constructing proofs. The most basic tactic is rfl, which performs evaluation on both sides and then solves trivial equalities: equalities whose two sides are exactly the same.

Tactic Use: rfl

Tactic Description: If the goal is an equality, complete the proof if the two sides are either the same, or can be made the same by evaluation.

Let’s see how this works:

example : 1 + 1 = 2 := by rfl

example creates anonymous (unnamed) theorems.

To learn more about tactics, let’s see a slightly more involved example, returning to our simple theorem about And:

def and_2'' : forall {P Q : Prop}, P /\ Q -> Q :=
  by intros P Q _
     cases ‹P /\ Q›
     exact ‹Q›

Here, we use three tactics. The first, intros, takes identifiers from the goal and moves them into the premises of the proof. This corresponds, in the underlying term, to a function binding those identifiers. Here we give explicit names to the propositions P and Q, but give no name to the conjunction, indicated by an underscore.

The Infoview will always show some number of goals (none if the proof is complete), and each one will have assumptions or premises (or hypotheses) that are available to use in the proof of that goal. The assumptions are separated from the goal by a turnstile ⊢, and the goal is the proposition that we are trying to prove. This is all shown under the "Tactic state" header.

Tactic Use: intros id1 id2 ... (or intro id1)

Tactic Description: Adds new hypotheses id1 : P1, id2 : P2, etc. to the premises, assuming the goal is of the form P1 -> P2 -> P or forall x : P1, y : P2, P. The new goal will then be P. Note that if an id is _, then Lean will generate a name for it.

We then do case analysis on one of our hypotheses: which one? The one we didn’t give a name to, and so we refer to it by its type, which Lean allows us to do using "french quotes" (typed as \f< and \f>): provided there is only one hypothesis with that type, this is not ambiguous, and it can be easier to read. Case analysis on a pair will then give us hypotheses for the two conjuncts. Note that it does not manipulate the goal in this case, but simply changes the form of the hypotheses. If we did case analysis on a disjunction (also known as a sum), we would get multiple goals, as there would be multiple different cases to consider. Case analysis in tactics corresponds to pattern matching in the underlying terms, though the Infoview state can make it a bit easier to manage.

Tactic Use: ‹P› (typed as \f< P \f>)

Tactic Description: If there is only one hypothesis of type P, this is a way of referencing it anonymously in other tactics.

Tactic Use: cases id

Tactic Description: If id is a hypothesis of some inductive type, perform case analysis on it, either constructing multiple hypotheses (if it is a product type) or multiple goals (if it is a sum type).

At this point, we can complete the proof by saying we want the one with type Q, using the tactic exact. This is a tactic that takes a term, and if it matches the goal, it completes the proof. It is the way to go back into "term mode" from "tactic mode" (it is the inverse of by in this way). All of the proofs we did before we started using tactics could be done "in tactic mode" by using a single invocation of exact, though it would be a bit pointless.

Tactic Use: exact term

Tactic Description: If term is a term of the same type as the goal, then this tactic completes the proof.
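
For example (a sketch), the trivial proof of True can be given with exact:

example : True := by exact True.intro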

There are many other ways we could write this. First, we could use the tactic assumption to complete the proof: it looks for a hypothesis that matches the goal.

def and_2''' : forall {P Q : Prop}, P /\ Q -> Q :=
  by intros P Q _
     cases ‹P /\ Q›
     assumption

Tactic Use: assumption

Tactic Description: If there is some hypothesis that has the same type as the goal, then this tactic completes the proof by using that hypothesis.
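
For example (a small sketch), assumption finds a hypothesis whose type matches the goal:

variable (P : Prop)

example (h : P) : P := by assumption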

We also could have given an explicit name to the conjunction, and referred to that with cases:

def and_2'''' : forall {P Q : Prop}, P /\ Q -> Q :=
  by intros P Q pq
     cases pq
     assumption

We can also be more precise with our case analysis. Conjunctions are created with And.intro, which has two fields, so we can pattern match to get:

def and_2''''' : forall {P Q : Prop}, P /\ Q -> Q :=
  by intros P Q pq
     cases pq with | intro a b => exact b

This form of cases is very close to how match looks in the term language. When doing case analysis on something like And, knowing which case we are in doesn’t do much (as there is only one), but as we’ll see below, when doing case analysis on inductive types with multiple constructors (like Or), we will want to be able to identify the case of our proof. While we can write it in the matching style, we can also simply identify the case we want to work on with case:

def and_2'''''' : forall {P Q : Prop}, P /\ Q -> Q :=
  by intros P Q pq
     cases pq
     case intro a b => exact b

This is very similar, but can be easier to read. In general, if there are multiple (named) goals we can switch to one of them with case, and we don’t actually have to bind any variables (they will be anonymous), so we could do this proof as:

def and_2''''''' : forall {P Q : Prop}, P /\ Q -> Q :=
  by intros P Q pq
     cases pq
     case intro =>
      exact ‹Q›

Tactic Use: case name id ... =>

Tactic Description: If there is a goal called name, focus on it, and bind (optional) identifiers to parameters if available.
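
Finally, as a sketch of cases on an inductive type with multiple constructors (the situation anticipated above with Or), here is or_1 from earlier written in tactic style, using case to identify each constructor:

def or_1' : forall {P Q R : Prop}, P \/ Q -> (P -> R) -> (Q -> R) -> R :=
  by intros P Q R pq hp hq
     cases pq
     case inl p => exact hp p
     case inr q => exact hq q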