Lecture 31: Linear Logic

1 Purpose

Introduce linear logic, and the inference-rule notation we need to present it.

2 Outline

Last time, we talked about how there is not a single logic: the logic inside Lean, commonly called constructive (or intuitionistic) logic, is different from the classical logic commonly used in mathematics. But there are actually many more logics, some of which are particularly useful when thinking about programs.

In this lecture, we will talk about another of these logics: linear logic. But first, in order to introduce the formalism necessary to talk about it, we’ll review the constructive (or intuitionistic) logic we’ve been using in Lean for most of the semester.

3 Intuitionistic Logic

It turns out that in order to deal with linear logic, we'll have to be a lot more precise about how we talk about variables. Up until now, whenever we have written logical formulas on paper, we have assumed that any variables we mention are available. In Lean, of course, we have to be more precise: variables are introduced by forall, and are in scope within the body of the expression. i.e., forall P : Prop, e allows P to be mentioned anywhere in e.
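
For instance, this small Lean example (ours, not from the notes) shows the scoping: P is introduced by the forall and is then available in the body:

example : ∀ P : Prop, P → P := fun P h => h   -- P is in scope to the right of the comma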

Another way we can describe this type of system is via what are called inference rules. They allow us to be very precise about how formulas can be constructed. And in particular, they allow us to talk about where variables are stored, in what are typically called environments (or contexts).

Here are inference rules for (some of) constructive (intuitionistic) logic:

-------- HYP
Γ, A ⊢ A

This is the simplest rule, which says "From A, and any other hypotheses in the environment Γ, we can conclude A."
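
In Lean, this rule is what you invoke when you refer to a hypothesis by name (or use the assumption tactic); a minimal example (ours):

example (A : Prop) (h : A) : A := h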

Logical connectives (and, or, etc.) have introduction and elimination rules. For example, the introduction rule for and is:

Γ ⊢ A    Γ ⊢ B
---------------- AND-I
Γ ⊢ A ∧ B

This corresponds to the constructor And.intro in Lean.
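
For example (our own illustration):

example (A B : Prop) (ha : A) (hb : B) : A ∧ B := And.intro ha hb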

The elimination rules (there are two) are:

Γ ⊢ A ∧ B
---------------- AND-E1
Γ ⊢ A

Γ ⊢ A ∧ B
---------------- AND-E2
Γ ⊢ B
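
In Lean, these correspond to And.left and And.right. For example (our own illustration):

example (A B : Prop) (h : A ∧ B) : A := And.left h
example (A B : Prop) (h : A ∧ B) : B := And.right h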

We can do the same for or:

Γ ⊢ A
---------------- OR-I1
Γ ⊢ A ∨ B

Γ ⊢ B
---------------- OR-I2
Γ ⊢ A ∨ B

i.e., these correspond to the constructors Or.inl and Or.inr. There is also an elimination rule for or, which we will need shortly: to use A ∨ B, we must show that the conclusion follows from either disjunct.

Γ ⊢ A ∨ B    Γ,A ⊢ C    Γ,B ⊢ C
--------------------------------- OR-E
Γ ⊢ C

This corresponds to Or.elim (or the cases tactic) in Lean.
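
For example (our own illustration):

example (P Q : Prop) (h : P) : P ∨ Q := Or.inl h
example (P Q : Prop) (h : P ∨ Q) (f : P → Q) : Q := Or.elim h f (fun hq => hq)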

Functions (or implications) also have introduction and elimination rules. The introduction rule is:

Γ, A ⊢ B
---------------- FUN-I
Γ ⊢ A → B

And its elimination rule, which corresponds to function application, is:

Γ ⊢ A → B    Γ ⊢ A
---------------- FUN-E
Γ ⊢ B

This is new notation for something that should be quite familiar by now: it is what you have been using to prove things in Lean.
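
In Lean, FUN-I corresponds to fun (or the intro tactic) and FUN-E to function application; for example (our own illustration):

example (A B : Prop) : (A → B) → A → B := fun f a => f a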

For example, if you wanted to prove a theorem

P ∨ Q → (P → Q) → Q

you would do the proof, as a derivation, as follows:

------------------- HYP   ----------- HYP
Γ..., P → Q ⊢ P → Q       Γ..., P ⊢ P
------------------------------------- FUN-E   ----------- HYP
Γ, P ∨ Q, P → Q, P ⊢ Q                         Γ..., Q ⊢ Q
----------------------------------------------------------- OR-E
Γ, P ∨ Q, P → Q ⊢ Q
--------------------- FUN-I (twice)
Γ ⊢ P ∨ Q → (P → Q) → Q

(Here Γ... abbreviates the full environment on that branch, and we elide the first premise of OR-E, Γ... ⊢ P ∨ Q, which follows immediately by HYP.)

As an exercise, you could do the same proof in Lean, and see how the steps you take correspond exactly to the rules applied in the derivation tree (intro, apply, assumption, etc.).
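
For reference, here is one way to write that proof in Lean (our phrasing of it):

example (P Q : Prop) : P ∨ Q → (P → Q) → Q := by
  intro hpq hf              -- FUN-I, twice
  cases hpq with            -- OR-E
  | inl hp => exact hf hp   -- FUN-E, then HYP
  | inr hq => exact hq      -- HYP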

4 Linear Logic

Linear logic starts from a fundamental idea: that there are resources that cannot be duplicated (and, in most presentations, must also be used). In particular, hypotheses are treated this way.

One reason this has been interesting for programming almost as long as it has been around is that managing resources is a fundamental part of programming at the lowest level; in particular, high-performance, low-resource programming often involves careful management of resources.

Many languages have experimented with features derived from linear logic, with the most successful being Rust: the fundamental invariant of Rust is that resources have owners, and that ownership has to be managed carefully. In particular, there can be only a single owner with the ability to mutate a resource, which is essentially the invariant that comes from linear logic (strictly speaking, Rust is closer to affine logic, since it allows values to be dropped without being used).

So how does this type of principle show up in logic? All hypotheses must be used, and they must be used exactly once. Now that we have a precise way of describing proof rules (and how they relate to environments), we can translate the above rules into analogs (some of which are closer than others) in linear logic:

-------- HYP
A ⊢ A

Here, the difference is that since all resources must be consumed, by the time we get down to using a single hypothesis, there should be nothing else in the environment.

Conjunctions turn into what is called a tensor:

Δ ⊢ A    Δ' ⊢ B
---------------- TENSOR-I
Δ, Δ' ⊢ A ⊗ B

Here we note a few key differences: first, we denote linear environments with the uppercase Greek delta (Δ), and since hypotheses in those environments can only be used once, the hypotheses needed to prove A must be distinct from those used to prove B. So there are two different environments, which are then combined to make an environment that can prove A ⊗ B.

The elimination rule for tensors is:

Δ ⊢ A ⊗ B    Δ', A, B ⊢ C
------------------------- TENSOR-E
Δ, Δ' ⊢ C

Note first that there is only a single elimination rule. This is because we cannot simply project out the first element: to do so would be to forget about the resources used to prove the second element. So we must use both parts (A and B) to prove C, along with further resources Δ' that are distinct from the Δ used to prove A ⊗ B.

We don’t have or, but we have something that looks a bit similar:

Δ ⊢ A    Δ ⊢ B
---------------- LAZYPAIR-I
Δ ⊢ A & B

Note that both premises use the same resources: the idea is that we can only ever use one of the two components, and the consumer chooses which one, via the two elimination rules:

Δ ⊢ A & B
---------------- LAZYPAIR-E1
Δ ⊢ A

Δ ⊢ A & B
---------------- LAZYPAIR-E2
Δ ⊢ B

The last connective is functions (linear implication), called "lolli" and written ⊸ (often typed -o), since the arrow looks like a lollipop:

Δ, A ⊢ B
---------------- LOLLI-I
Δ ⊢ A ⊸ B

Introduction is the same as before, but elimination is a bit different:

Δ ⊢ A ⊸ B    Δ' ⊢ A
-------------------- LOLLI-E
Δ, Δ' ⊢ B

i.e., the resources used to prove the function (implication) and the resources used to prove the argument are distinct, so applying the function consumes both.
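
Lean's own logic is not linear, so we cannot enforce these rules directly on Prop, but we can still experiment with them by writing the sequent calculus above as an inductive definition. The following is a minimal sketch (all names are ours; we represent environments as lists, which elides the usual exchange rule, so a more faithful version would use multisets):

inductive LProp where
  | atom     : String → LProp
  | tensor   : LProp → LProp → LProp   -- A ⊗ B
  | lazyPair : LProp → LProp → LProp   -- A & B
  | lolli    : LProp → LProp → LProp   -- A ⊸ B

-- Seq Δ A encodes the judgment Δ ⊢ A; the binary rules split the
-- environment, so no hypothesis can ever be consumed twice.
inductive Seq : List LProp → LProp → Prop where
  | hyp (A : LProp) : Seq [A] A
  | tensorI {Δ Δ' : List LProp} {A B : LProp} :
      Seq Δ A → Seq Δ' B → Seq (Δ ++ Δ') (LProp.tensor A B)
  | tensorE {Δ Δ' : List LProp} {A B C : LProp} :
      Seq Δ (LProp.tensor A B) → Seq (A :: B :: Δ') C → Seq (Δ ++ Δ') C
  | lazyPairI {Δ : List LProp} {A B : LProp} :
      Seq Δ A → Seq Δ B → Seq Δ (LProp.lazyPair A B)
  | lazyPairE1 {Δ : List LProp} {A B : LProp} :
      Seq Δ (LProp.lazyPair A B) → Seq Δ A
  | lazyPairE2 {Δ : List LProp} {A B : LProp} :
      Seq Δ (LProp.lazyPair A B) → Seq Δ B
  | lolliI {Δ : List LProp} {A B : LProp} :
      Seq (A :: Δ) B → Seq Δ (LProp.lolli A B)
  | lolliE {Δ Δ' : List LProp} {A B : LProp} :
      Seq Δ (LProp.lolli A B) → Seq Δ' A → Seq (Δ ++ Δ') B

Because the binary rules concatenate disjoint environments, no hypothesis can be duplicated; and because HYP requires a singleton environment, nothing can be silently discarded.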

5 Want to learn more?

This, and much more, is in notes by Frank Pfenning at https://www.cs.cmu.edu/~fp/courses/15816-f01/handouts/linear.pdf.