On this page:
1 Purpose
2 Outline
3 Minimal propositional logic
4 Truth and falsehood
5 Propositional Logic

Lecture 14: Propositional logic

1 Purpose

Explore minimal and full propositional logic in Lean

2 Outline

But we aren’t really here to learn about typed functional programming, no matter how rich. Our goal is to prove properties of programs. So how do we get from a typed functional programming language to theorem proving?

3 Minimal propositional logic

We’ll start with a simple problem, perhaps the simplest: how to express, and prove theorems about, minimal propositional logic. That is, the language that only has logical variables and implications. This will allow us to become familiar with what it means to express theorems and prove them, and understand what that means!

In Lean, as we’ve said, everything has a type, including logical statements, or propositions. The type of propositions is Prop, and we can declare that we have arbitrary ones using variables:

variable (P Q R : Prop)

Here, we’ve declared three variables that all have type Prop. We can read this as "consider arbitrary propositions P, Q, R".

variable works for any type; we could write "consider arbitrary natural numbers, or arbitrary lists, or arbitrary graphs, etc".
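For instance, a small sketch (the variable names here are ours, chosen for illustration):

```lean
-- variable works for any type, not just Prop:
variable (n m : Nat)       -- "consider arbitrary natural numbers n, m"
variable (xs : List Nat)   -- "consider an arbitrary list of naturals"
```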

Now, Lean is both a programming language and a tool for expressing and proving theorems. It could have added separate rules for logic: implication as an arrow ⇒, and then logical rules for introduction (abstraction) and elimination (modus ponens), and many more rules for different logical constructions, but it turns out that the programming language is rich enough to express the logic. In the context of minimal propositional logic, rather than having a separate implication arrow, we can re-use the same arrow -> that we use for functions.

Thus Lean makes no distinction between a function that takes P and produces Q and a theorem that, given P, yields Q. This is partly made possible because function arrows in Lean are dependent: the output type can depend on the value of the input. This is quite important for theorems, as, e.g., in the example we had above: x is the value of the input, and it is used in the output type (a proof about equality). It can also be useful for programming, though we will not explore this much: e.g., you might encode invariants of a data structure in the types, and the output could then relate in sophisticated ways to the input.

If a function arrow is an implication, then what is a proof of such a statement? i.e., we can write down the following theorem:

variable (P Q R : Prop)

theorem T : (P -> Q) -> P -> Q

Which we can read as: given a theorem from P to Q and a proof of P, we can prove Q. The proof is a convincing argument that one can proceed as described. A particularly convincing argument is a procedure that given the inputs, produces the output. What procedure could that be? Again, we can appeal to our understanding of this as a type in a programming language. The procedure is a program with the given type. We can complete the proof as follows:

variable (P Q R : Prop)

theorem T : (P -> Q) -> P -> Q :=
  fun Hpq p => Hpq p

i.e., a proof is a term that inhabits the type that is the theorem. Here, it is a function that takes Hpq, p and returns Hpq applied to p.

We could write exactly the same thing in ISL+:

; T' : (P Q) [P -> Q] P -> Q
(define (T' Hpq Hp) (Hpq Hp))

Note that in the type, (P -> Q) is taken as the first argument, and the result is a function from P to Q. For compactness, we wrote the term as a function of two arguments rather than a series of functions of one argument, but the latter would be equivalent:

variable (P Q R : Prop)

theorem T : (P -> Q) -> P -> Q :=
  fun Hpq => fun p => Hpq p

We could also write this without the anonymous function, as:

variable (P Q R : Prop)

def T (Hpq : P -> Q) (p : P) : Q := Hpq p

This same choice between functions defined with or without an explicit lambda is present in ISL+, where the following two definitions are identical:

(define (myfun x y) (+ x y))
(define myfun (lambda (x y) (+ x y)))

As an exercise, construct a term for the following theorem:

variable (P Q R : Prop)
theorem T' : (P -> Q -> R) -> Q -> P -> R

Last time, we introduced Lean, the notion of proof, and how Lean serves as both a programming language and a proof assistant at the same time. Today, we’re going to go into more detail about what Truth is, how it relates to Proof, and continue on our journey exploring what we can express in Lean, moving from Minimal Propositional Logic to full Propositional Logic.

4 Truth and falsehood

In propositional logic, a formula or statement is valid if for all assignments of variables, the statement evaluates to T. In PBT and Rosette, true statements are those for which we have not yet found a counter-example (thus, they are only possibly true). Sometimes, with Rosette, our inputs are finite and thus we can exhaustively test, but likely, we made a finite approximation of our actual problem, and only exhaustively tested that approximation.

We can be a lot more precise in Lean, but we also have to be a lot more careful: precision comes at a cost, as small mistakes could render a so-called theorem not actually true.

So let’s talk about how to represent truth and falsehood in Lean. This will be our second encounter with data definitions in Lean (after Nat), also called inductive types. Inductive types are an important enough part of the system that the theory that underlies Lean is called the Calculus of Inductive Constructions. We’ll get into them in a lot more detail soon, but for now, we will focus on a couple of important examples. So what are true and false?

Well, it turns out there are two different versions, and the difference is very important. First, we have the ordinary, boolean data definition that should be familiar from any number of programming languages, though perhaps written in an unfamiliar notation.

inductive Bool : Type where
  | false
  | true

Let’s read this, carefully. First, it says we are declaring a new inductive type called Bool. In Lean, everything has a type, including types, which is what the : Type is saying: the type of Bool is Type. This data definition has two cases: false and true.

Aside from the syntax, this should be very familiar. And indeed, we have familiar operations on type Bool: and, or, not, if, etc. However, in Lean, we have a separate, much more important definition of truth and falsehood that exists for a very important reason. As we mentioned above, in theorem provers that subscribe to the types-as-theorems idea that originated in AUTOMATH, we should not be able to prove false! One of the oldest statements from logic is ex falso quodlibet. Translated as "from falsehood, anything follows", the idea is that if you have a proof of false, you can prove any theorem. As a result, any logical system in which you can prove false would not be very useful. That means if we use our type Bool as our definition of logical truth and falsehood, we are in trouble, as we can easily "prove" false: we just use the constructor for false.

Why is this not an issue in PBT or with Rosette? Essentially, our only reasoning is based on evaluation and assertions (often, of equalities). Our only theorems are individual test cases, and their proofs are based on evaluation, which must terminate. We thus do not have the principle of explosion (ex falso quodlibet), along with just about any other logical reasoning! Indeed, we rarely have proofs: we simply have searches for counter-examples, though in certain cases it can be exhaustive (and thus form a proof).

So how should we define logical truth and falsehood? Truth is the following:

inductive True : Prop where
  | intro

i.e., True is an inductive type that has type Prop and a single constructor, intro. Why Prop and not Type? Lean distinguishes between types that are used for theorems, which have type Prop, and types that are used for computation, which have type Type. The actual situation is a bit more complicated, as there is an entire hierarchy of types that exists to avoid circularities in the logic, but generally we’ll be able to avoid worrying about that.

Why is True a type that has a single constructor? Well, logical truth is essentially trivial: it tells you nothing, on its own, and thus you can always construct a proof of it (you don’t need to know anything).
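Accordingly, a proof of True is immediate; a minimal sketch (the theorem name here is ours):

```lean
-- True has exactly one constructor, so this is the whole proof:
theorem truth : True := True.intro
```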

Logical falsehood ends up being a lot more important. How do we define it?

inductive False : Prop

This type is much more baffling, at first. It is a type, that has type Prop, but it has no constructors. Why does it have no constructors? Because it is impossible to construct. This is exactly what we want out of our logic, which is our programming language! Thus, we can formally write the statement "from false follows anything", without worrying, since there should be no way of constructing a value of type False. Note the importance of the distinction between the types (the theorem statements) and the terms (the proofs). The type False can easily be written down, and indeed, it is a very important type! We will often want to express that certain cases are impossible. How will we do that? We will write that if such cases occur, False will result, and as a result, we need not do anything more in those cases (as they become trivial).

If, in a proof, we end up with a term of type False, by inspecting the value, we can determine that we must be in an impossible case, because such a value could never have been constructed, and thus no further work is needed.
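As a sketch of how this plays out (the theorem name ex_falso is ours), Lean provides False.elim, which turns a term of type False into a proof of anything:

```lean
variable (P : Prop)

-- Given an (impossible) proof of False, we can conclude any P.
theorem ex_falso (h : False) : P := False.elim h
```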

5 Propositional Logic

Now that we have a way to describe truth and falsehood, let’s continue on with operators from propositional logic. We already have implication (which exists in minimal propositional logic) how about logical and?

structure And (a b : Prop) : Prop where
  /-- `And.intro : a → b → a ∧ b` is the constructor for the And operation. -/
  intro ::
  /-- Extract the left conjunct from a conjunction. `h : a ∧ b` then
  `h.left`, also notated as `h.1`, is a proof of `a`. -/
  left : a
  /-- Extract the right conjunct from a conjunction. `h : a ∧ b` then
  `h.right`, also notated as `h.2`, is a proof of `b`. -/
  right : b

And a separate infix notation definition:

infixr:35 " /\\ " => And

And is defined with structure, which is a wrapper on top of inductive that allows you to give names to the fields; in this case, left and right. It is shorthand for defining the inductive and defining the accessor functions.
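As a sketch of what that shorthand expands to (the names And' and And'.left here are ours for illustration, not Lean's actual internals):

```lean
-- A hand-rolled version of And, written as a plain inductive:
inductive And' (a b : Prop) : Prop where
  | intro (left : a) (right : b)

-- The accessor that `structure` would have generated for us:
def And'.left {a b : Prop} : And' a b -> a
  | And'.intro l _ => l
```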

To use it, we could write a statement like P /\ Q -> P as:

def and_1 (p : P /\ Q) : P := p.left

Despite the new syntax, this is entirely something you could have written in ISL+!

(define-struct intro [a b])
;; An [And X Y] is a (make-intro X Y)

(define (and_1 p)
  (intro-a p))

What is that argument p? Well, it has type P /\ Q, i.e., it is a conjunction of two propositions. That is its type: the term is simply a pair, and so we can use the projections (.left and .right) to extract the fields from the pair; in this case, the first field is the proof of P. We could also have used p.1.

So what this program does is, given a proof of a conjunction (a term of type P /\ Q), returns a proof of P (a term of that type).

Written in this style, both the theorem (the type) and the proof (the term) are somewhat mixed together.
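For instance, we can use both the constructor And.intro and the projections in one proof (the name and_swap is ours):

```lean
variable (P Q : Prop)

-- Take a conjunction apart with the projections, rebuild it swapped:
theorem and_swap (h : P /\ Q) : Q /\ P := And.intro h.right h.left
```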

In addition to logical and, we have logical or, defined as another inductive type:

inductive Or (a b : Prop) : Prop where
  | /-- `Or.inl` is "left injection" into an `Or`. If `h : a` then `Or.inl h : a ∨ b`. -/
    inl (h : a) : Or a b
  | /-- `Or.inr` is "right injection" into an `Or`. If `h : b` then `Or.inr h : a ∨ b`. -/
    inr (h : b) : Or a b

Which similarly has infix syntax:

infixr:30 " \\/ " => Or

Again, this could be defined in ISL+:

(define-struct inl [a])
(define-struct inr [b])
;; An [Or X Y] is one of:
;; - (make-inl X)
;; - (make-inr Y)

Unlike And, logical or is an inductive type with two cases: Or.inl and Or.inr. This is a constructive view of logical disjunction: if you have a proof of A \/ B, that should mean that you either have a proof of A or a proof of B. Thus, the two ways of constructing A \/ B are constructors that take terms of type A and B, respectively.

To use a disjunction, we end up having to consider the two cases: given a term (proof) of type A \/ B, we must handle the case that either A was true or B was true. This means that, unlike with And, there are no projections. If x : A \/ B, we don’t have x.1 : A and x.2 : B. Rather, we have two possibilities for x: it is either Or.inl a or Or.inr b, where a : A and b : B.

We can deal with that in code using pattern matching, as follows:

def or_1 (pq : P \/ Q) (hp : P -> R) (hq : Q -> R) : R :=
  match pq with
  | Or.inl p => hp p
  | Or.inr q => hq q

Here, we handle the two possibilities, and invoke the corresponding function (hp or hq) accordingly.

This is exactly analogous to how we program with itemizations in ISL+. The same function could be defined, using our data definition for Or above, as:

; or_1 : (P Q R) [Or P Q] [P -> R] [Q -> R] -> R
(define (or_1 pq hp hq)
  (cond [(inl? pq) (hp (inl-a pq))]
        [(inr? pq) (hq (inr-b pq))]))
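To see construction and case analysis of Or together in Lean, a small sketch (the name or_swap is ours):

```lean
variable (P Q : Prop)

-- Case-analyze the disjunction, then inject into the other side:
def or_swap (pq : P \/ Q) : Q \/ P :=
  match pq with
  | Or.inl p => Or.inr p
  | Or.inr q => Or.inl q
```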

Logical not is not defined directly, but by way of logical falsehood:

def Not (a : Prop) : Prop := a -> False
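Unfolding this definition, a proof of Not P is just a function from P to False; for instance (the theorem name here is ours):

```lean
variable (P : Prop)

-- Not P unfolds to P -> False, so applying hnp to hp yields False:
theorem contradiction (hp : P) (hnp : Not P) : False := hnp hp
```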