On this page:
1 Purpose
2 Outline
2.1 The halting problem
3 Other logics

Lecture 30: Why logic?

1 Purpose

Revisit why we study logic in computer science.

2 Outline

Some of you are probably wondering, and maybe have been wondering for a while, why you are required to take this course. Some of you are software concentrators who think, I’m going to go get a job at Google or Amazon or wherever, and I don’t need to know any of this. Some of you are probably hotshot programmers who think you know how to program better than any of your professors, and don’t get what this is all about.

And if all you’re going to do is write programs that look a lot like programs other people have already written, then you’re probably right: none of this may matter for you. But if your goal is to move beyond cookie-cutter programming, then at some point you are going to start bumping into fundamental problems. You may be trying to solve problems no one has solved before. And to do that, it’s worth understanding the foundations of what programming is.

Indeed, one question that is pretty important to ask yourself is: are there programs that you cannot write? Not programs you don’t know how to write yet, but programs that you cannot write, no matter how hard you try, how much you learn, how much you practice? Programs that no person can ever write? If you’re going to try to push the field forward, these are the types of things you should wonder about.

And it turns out that logicians, the people who founded this whole field, have been thinking about exactly this for a very long time.

The earliest idea of “thinking machines” dates back to the nineteenth century, with Charles Babbage’s design for an analytical engine (which Ada Lovelace famously wrote programs for), but it was never built.

The real origins of computer science are old, but a little more recent: in the early 20th century, many mathematicians were trying to figure out whether mathematics could be made into a mechanical process: could theorems be proved simply by following a series of steps? Logicians were at the center of this, and one of the most important of them, and still one of the most important figures for computer science, was Alan Turing: a mathematician, logician, and, once the name could be given, a computer scientist. The highest award in computer science is named after him.

He did all sorts of things, including thinking a lot about artificial intelligence, but the two contributions he is probably most famous for are, first, coming up with the mathematical model of what we call a Turing machine: an abstract machine that carries out operations using an infinite tape. This is still the model used in theoretical computer science to classify problems. The second is being part of a team that used actual machines in World War II to break Nazi encryption, most famously the Enigma machine used to secure wireless transmissions from Nazi submarines. Those machines, which were general purpose in a way unlike anything previous, are the first machines that could truly be called computers, and so Turing had his hands in both the foundational theoretical and the foundational practical aspects of the field.

Unquestionably brilliant, he died in 1954, likely by suicide, at only 41, after he was forced to undergo chemical castration and was barred from further government work, where most of the active work was happening, because he was gay, and not particularly interested in hiding it.

The other foundational connection of logic, and in particular propositional, Boolean logic, to computer science comes from Claude Shannon. Shannon realized, in the late 1930s, that the ideas of the English mathematician George Boole could be applied to simplifying the, at that point, quite ad hoc electronics of early computing machinery. The master’s thesis he wrote in 1937 is probably the most influential master’s thesis of all time: it was about the circuits used for telephone relays, but what he showed is that simple relay circuits could compute anything expressible in Boolean algebra, and thus that Boolean algebra could serve as the foundation for implementing logical operations in hardware. That idea underlies all digital electronics: Apple’s M2 Max chip has 67 billion transistors, and what Shannon showed (and what is still used) is how to combine a few transistors into gates implementing logical AND, OR, NOT, etc.
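
To make the gate-combination idea concrete, here is a tiny sketch in Lean (the gate names are ours, purely for illustration): a single NAND operation on Bool is enough to build NOT, AND, and OR.

def nandGate (a b : Bool) : Bool := !(a && b)

-- Every basic Boolean operation can be built from NAND alone:
def notGate (a : Bool) : Bool := nandGate a a
def andGate (a b : Bool) : Bool := notGate (nandGate a b)
def orGate  (a b : Bool) : Bool := nandGate (notGate a) (notGate b)

-- A few sanity checks, each holding by computation:
example : notGate false      = true  := rfl
example : andGate true false = false := rfl
example : orGate  false true = true  := rfl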

But we were talking about impossible programs. Are there any you can’t write?

Here’s one, and it’s almost a hundred years old, because it turns out these logicians thought very hard about computation, and what could and could not be done. The same questions are still relevant, especially as we try to build machines that do ever more sophisticated things. There is a lot of talk about machines thinking, consciousness, etc.: understanding the limits of computation could go a long way toward making those conversations more rational.

2.1 The halting problem

(This presentation is due to S. Tripakis, “A simple way to explain undecidability,” 2019, hal-02068449.)

Let TERMINATOR(P,x) be a program that takes as input a program P and an input x, and returns YES if P terminates on input x and NO otherwise.

Now consider this program:

Q(P) := if (TERMINATOR(P,P) = YES)
        then loop forever
        else return YES

Now, does Q(Q) terminate? TERMINATOR(Q,Q) must return either YES or NO. Suppose it returns YES: by the specification of TERMINATOR, that means Q(Q) halts; but by the definition of Q, a YES answer sends Q(Q) into the infinite loop, so Q(Q) does not terminate. Contradiction. Suppose instead it returns NO: that means Q(Q) does not halt; but then Q(Q) takes the else branch and terminates. Contradiction again. The only possibility, in other words, is that our supposed program TERMINATOR must not be possible to write. Not that it requires techniques we do not know yet, but that it is literally impossible.
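
As an aside, Lean itself cannot express a nonterminating program (every Lean function is total), so the halting problem cannot even be stated in it directly; but the diagonal trick at the heart of the argument can be carried out. Here is a sketch (the theorem and its name are ours, not from the library): no function f can enumerate all Boolean-valued functions, because the “flipped diagonal” escapes it.

theorem diagonal {α : Type} (f : α → (α → Bool)) (a : α) :
    f a ≠ (fun x => !(f x x)) := by
  intro h
  -- apply both sides to a itself: f a a must equal its own negation
  have h' : f a a = !(f a a) := congrFun h a
  cases hb : f a a
  -- in both cases the equation reduces to an impossible one
  case false => rw [hb] at h'; exact Bool.noConfusion h'
  case true  => rw [hb] at h'; exact Bool.noConfusion h'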

This is the most famous impossibility result, due to Turing, but there are plenty of others. One fascinating one, for logic, is due to Kurt Gödel, who showed that any consistent formal system rich enough to express arithmetic has statements that are true but unprovable. This shocked mathematicians, and it is still considered one of the most important results of the 20th century. It gets back to something we talked about at the beginning of the semester: the difference between truth and proof.

3 Other logics

Logic allows us to ask, and sometimes answer, these kinds of fundamental questions that are at the core of computer science.

But there is not a single logic: we have already talked, in passing, about how Lean is at its core a constructive logic, whereas in mathematics we typically deal with classical logic. What is the difference? Why do we care? It turns out that for computation, proof matters: in mathematics, it doesn’t matter whether a proof would take a millisecond or a million years to run (proofs are not run!), but in computation that difference is essential. There are various formulations, but usually the main difference is that classical logic includes the following statement:

∀ P : Prop, P ∨ ¬ P

Known as the law of the excluded middle, this is not provable in Lean (or any constructive logic), because it is not constructive: in constructive logic, a proof of A ∨ B means we have either a proof of A or a proof of B. The law of the excluded middle thus says that any statement can either be proved or disproved. But there may be statements for which we may not know how to prove _or_ disprove them!
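
Note, though, that constructive logic does not refute the law of the excluded middle either: its double negation is provable with no classical axioms at all. A small sketch (the theorem name is ours):

theorem not_not_em (P : Prop) : ¬ ¬ (P ∨ ¬ P) := by
  intro h          -- h : (P ∨ ¬ P) → False
  apply h
  apply Or.inr     -- commit to the ¬ P side...
  intro p
  apply h          -- ...but handed a proof of P, answer again with the left side
  exact Or.inl p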

This is only sort of true: it is actually possible to give a constructive interpretation to the law of the excluded middle, as first shown in Murthy’s 1990 PhD thesis: you just need control operators. With call/cc, you can construct a proof of p ∨ ¬p as follows: capture the current continuation, and return the ¬p case. Since ¬p is a function from p to False (i.e., a function that produces an impossible value), if that function is ever called with a proof of p, you use the captured continuation to jump back and answer with the other case instead, using that very proof of p. Control operators are bizarre things!

In Lean, we can _include_ the law of the excluded middle: indeed, Lean has a module called “Classical” with various axioms and theorems, including exactly that statement, Classical.em.
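
For example, a quick check (Classical.em ships with core Lean):

#check (Classical.em : ∀ (p : Prop), p ∨ ¬ p)

example (P : Prop) : P ∨ ¬ P := Classical.em P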

Let’s try to prove some theorems that require classical reasoning:

theorem notp_impp_p : ∀ P : Prop, (¬ P → P) ↔ P :=
 by intros P
    constructor
    case mp =>
      intros H
      -- classical step: split on whether P holds
      cases (Classical.em P)
      assumption
      apply H
      assumption
    case mpr =>
      intros p np
      assumption

theorem orpq_notp_impq : ∀ (P Q : Prop), P ∨ Q ↔ (¬ P → Q) :=
 by intros P Q
    constructor
    case mp =>
      intros H
      cases H
      intros
      contradiction
      intros
      assumption
    case mpr =>
      intros H
      -- classical step again: excluded middle on P
      cases (Classical.em P)
      apply Or.inl
      assumption
      apply Or.inr
      apply H
      assumption