## Lecture 16: Type Checking

### 1 Introduction

As our language gets richer, we’re finding more reasons to reject programs that we do not want to compile: because names are not in scope correctly, or definitions hide existing definitions, or functions aren’t used with the proper number of arguments. To this list we can add another problem: using functions or operators with the wrong types of arguments. We’ve already seen the impact this has on our compiled output: we have to include tag-checking code before every operation, to ensure that the operands are plausible inputs. But such dynamic checking has drawbacks: while it’s convenient to start writing programs in a dynamically typed language, we pay the cost of repeated runtime checking, a lack of guarantees that hidden bugs aren’t still lingering in our code, and no clear attribution of how the implausible data managed to get to the operation that (mis)used it. Statically typed languages avoid these downsides, and so it makes sense for us to try to develop a type checker that can confirm, at compile time, whether or not a given input program is internally consistent with its usage of data.

As we’ll soon see, the challenge in designing a type system lies in balancing expressiveness, ergonomics, and decidability. There will always be programs that are correct, in the sense that they will never crash at runtime due to misuse of data, but that we cannot convince the compiler of their correctness. They may rely on intricate invariants, or subtle data and control-flow dependencies, that the type system isn’t rich enough to recognize and exploit. We could attempt to enhance the type system to do so, but then we force the programmer to supply enough evidence at all times to satisfy that new type system, and such evidence (in the form of type annotations) might be overwhelmingly tedious. Worse yet, some type safety properties just might not even be decidable¹, and it would be a shame to enhance our compiler’s type system to the point that it outright gets stuck in an infinite loop, trying to compile our program! It is not our goal right now to dive into advanced type system design, or to explore the various trade-offs in type system ergonomics. The type system we will develop here is one of the simpler ones, and understanding how it works will lay the groundwork for understanding more advanced systems.

### 2 Type systems as formal judgements

The informal goal of a type system is to confirm that a program obeys a certain self-consistency property. We can achieve this in many ways, but as with all the other phases of our compiler, the most convenient way to do so is by examining the syntax tree of our program, and locally confirming that each node of the tree obeys the property we want. Note that we have subtly moved the goalposts here: we are measuring a particular syntactic property of our program that is sufficient but not necessary to confirm that our program is type-sound. Our syntactic property will take the form of a judgement, i.e., “we hereby judge this expression to have our desired property, assuming the following preconditions.” We’ll develop the notation for this in stages.

#### 2.1 Values

Let’s start with the simplest cases in our language. It’s safe to agree that all numbers in our language are, in fact, integers; likewise, `true` and `false` are booleans. We will simply annotate this as

\begin{equation*}\dfrac{}{n : \mathsf{Int}}[\text{T-Int}] \qquad\qquad \dfrac{}{b : \mathsf{Bool}}[\text{T-Bool}]\end{equation*}

The name in the brackets on the side is a convenient way to refer to the particular inference rule being used, later on.

#### 2.2 Primitives and derivation trees

Next, let’s look at one of the easier operations in our language, addition. Addition produces a number provided both of its arguments are numbers:

\begin{equation*}\dfrac{e_1 : \mathsf{Int} \quad e_2 : \mathsf{Int}}{e_1 + e_2 : \mathsf{Int}}[\text{T-Plus}]\end{equation*}

Read this aloud as “If \(e_1\) has type \(\mathsf{Int}\) and \(e_2\) has type \(\mathsf{Int}\), then \(e_1 + e_2\) has type \(\mathsf{Int}\)”. In other words, the bar acts as an if-then, while the colon acts as the phrase “has type”. Looking again at the two rules for values above, we see they’re degenerate cases of this: there are simply no preconditions needed for determining that numbers or booleans have the appropriate types. We say that such typing rules are axioms.

How might we deduce that the expression \((3 + 4) + 5\) has type \(\mathsf{Int}\)? Since the premises of our rule for plus-expressions have the same form as our judgement does, we can simply join multiple uses of the rules we have into a derivation tree, where the root of the tree is the judgement we want to prove, and the leaves of the tree are all axioms.

\begin{equation*}\dfrac{\dfrac{\dfrac{}{3 : \mathsf{Int}} \quad \dfrac{}{4 : \mathsf{Int}}}{3 + 4 : \mathsf{Int}} \quad \dfrac{}{5 : \mathsf{Int}}}{(3 + 4) + 5 : \mathsf{Int}}\end{equation*}

Analogous rules hold for most of our other primitive operations. For example,

\begin{equation*}\dfrac{e_1 : \mathsf{Int} \quad e_2 : \mathsf{Int}}{e_1 < e_2 : \mathsf{Bool}}[\text{T-LessThan}]\end{equation*}

#### 2.3 Type variables

However, some of our primitives are more complicated. For example, `isbool` accepts anything as its argument, and returns a boolean. We could write something like

\begin{equation*}\dfrac{e : \mathsf{Any}}{\mathtt{isbool}(e) : \mathsf{Bool}}\end{equation*}

where \(\mathsf{Any}\) is intended to mean “any type in our language”, and this would work, at least so far. But if we attempted the same thing with `print`, which accepts anything and returns it, we’d get

\begin{equation*}\dfrac{e : \mathsf{Any}}{\mathtt{print}(e) : \mathsf{Any}}\end{equation*}

...and now we have a problem. The result type of \(\mathsf{Any}\) isn’t an acceptable premise for most of our typing rules: it’s lost too much information about its argument. Instead, we need a way to indicate that these operators are “generic”, or parametric over the type of their argument. We typically use the Greek letter \(\tau\) to indicate a type, and so we would write

\begin{equation*}\dfrac{e : \mathsf{\tau}}{\mathtt{isbool}(e) : \mathsf{Bool}}[\text{T-IsBool}] \qquad\qquad \dfrac{e : \mathsf{\tau}}{\mathtt{print}(e) : \mathsf{\tau}}[\text{T-Print}]\end{equation*}

Every time the same type-variable appears in a rule, it must mean the same type. So the latter rule says, “If \(e\) has type \(\tau\), then \(\mathtt{print}(e)\) does too.”

Likewise, `==` compares any two values and returns a boolean. We have a choice to make: we could permit the two values to be of distinct types, or we could require that they be of the same type. Since the former situation is guaranteed to produce `false` as an answer, most type systems require that equality take two arguments of the same type:

\begin{equation*}\dfrac{e_1 : \mathsf{\tau} \quad e_2 : \mathsf{\tau}}{e_1 == e_2 : \mathsf{Bool}}[\text{T-Eq}]\end{equation*}

#### 2.4 Variables

How might we handle variables and let-bindings? We don’t seem to have enough information in our typing judgement to keep track of what variables are in scope, and of what types. So we need to generalize our judgement form to

\begin{equation*}\Gamma \vdash e : \mathsf{\tau}\end{equation*}

which is read, “In context \(\Gamma\), expression \(e\) has type \(\tau\).” A context is simply a list of bindings of variable names to types. We’ll use it as follows:

\begin{equation*}\dfrac{\Gamma(x) = \tau}{\Gamma \vdash x : \mathsf{\tau}}[\text{T-Var}] \qquad\qquad \dfrac{\Gamma \vdash e : \mathsf{\tau_1} \quad \Gamma, x : \tau_1 \vdash b : \mathsf{\tau_2}}{\Gamma \vdash (\mathtt{let\:} x : \tau_1 = e \mathtt{\:in\:} b)_{\tau_2} : \mathsf{\tau_2}}[\text{T-Let}]\end{equation*}

The first rule confirms that a variable has a given type if the context maps that variable to that type. The second rule manipulates the context: first, it typechecks the let-binding itself in the original context \(\Gamma\). Assuming that succeeds, it typechecks the body at the desired type (\(\tau_2\)) in a context that has been enhanced with the binding for \(x\). By convention, because it’s easier to read typographically, we tend to add new bindings to \(\Gamma\) on the right side of the context, nearest the turnstile, and newer bindings implicitly shadow older bindings. In actual implementations, we would likely use a linked list, and bindings near the head of the list shadow bindings in the tail of the list.

(Note that if we had not provided \(\tau_1\) or \(\tau_2\) as part of the syntax of our program, then we would have had to guess what they were, in order to derive the premises of this rule. This is why languages without type inference require the programmer to annotate the types of their variables, or return types of their functions, etc.)
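The linked-list reading of \(\Gamma\) is easy to sketch in OCaml. The names here (`lookup`, and a bare-bones `typ` with just two constructors) are illustrative stand-ins, not the richer definitions the implementation section develops later; the point is only that consing new bindings onto the front of the list makes shadowing fall out of a plain front-to-back search:

```ocaml
(* A context is an association list; the newest binding sits at the head. *)
type typ = TInt | TBool   (* simplified stand-in for the real type of types *)

(* List.assoc_opt searches front to back, so a newer binding for [x]
   automatically shadows any older one deeper in the list. *)
let lookup (gamma : (string * typ) list) (x : string) : typ option =
  List.assoc_opt x gamma

let () =
  (* as if we had: let x : Int = ... in let x : Bool = ... in ... *)
  let gamma = [ ("x", TBool); ("x", TInt) ] in
  assert (lookup gamma "x" = Some TBool);   (* the head binding wins *)
  assert (lookup gamma "y" = None)
```

Returning an `option` leaves the treatment of unbound names to the caller; a T-Var clause in a checker would turn `None` into a scope error.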

All of our other rules do not manipulate bindings, so they just pass the context through unchanged. For example,

\begin{equation*}\dfrac{\Gamma \vdash e_1 : \mathsf{\tau} \quad \Gamma \vdash e_2 : \mathsf{\tau}}{\Gamma \vdash e_1 == e_2 : \mathsf{Bool}}\end{equation*}

Likewise all the other rules we’ve named above can be similarly generalized.

#### 2.5 Functions and function calls

Suppose we have a function definition and use:

```
def add(x : Int, y : Int) -> Int:
  x + y

add(3, 5) : Int
```

In order to check this program, we need to know the type for `add`. It’s not enough to merely record that it is a function; we need to know that it takes two arguments, both of which must be `Int`s, and that it returns an `Int`. We will write this as a type \((\mathsf{Int}, \mathsf{Int} \rightarrow \mathsf{Int})\). Conversely, when we apply a function, we need to be sure that the application supplies the correct number of arguments, of the correct types, and then we can derive the final type as needed. Formally, the application typing rule looks as follows:

\begin{equation*}\dfrac{\Gamma \vdash f : \mathsf{(\tau_1, \ldots, \tau_n \rightarrow \tau_r)} \quad \Gamma \vdash e_i : \mathsf{\tau_i} \:\:(\text{for each } i)}{\Gamma \vdash f(e_1, \ldots, e_n) : \mathsf{\tau_r}}[\text{T-App}]\end{equation*}
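For instance, once \(\Gamma\) maps \(\mathtt{add}\) to \((\mathsf{Int}, \mathsf{Int} \rightarrow \mathsf{Int})\), the call \(\mathtt{add}(3, 5)\) from above derives type \(\mathsf{Int}\), using \(\text{T-Var}\) for the function position and \(\text{T-Int}\) for the arguments:

\begin{equation*}\dfrac{\dfrac{\Gamma(\mathtt{add}) = (\mathsf{Int}, \mathsf{Int} \rightarrow \mathsf{Int})}{\Gamma \vdash \mathtt{add} : (\mathsf{Int}, \mathsf{Int} \rightarrow \mathsf{Int})} \quad \dfrac{}{\Gamma \vdash 3 : \mathsf{Int}} \quad \dfrac{}{\Gamma \vdash 5 : \mathsf{Int}}}{\Gamma \vdash \mathtt{add}(3, 5) : \mathsf{Int}}\end{equation*}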

To typecheck a function body, we simply bind all the arguments at their annotated types and add them to \(\Gamma\), then typecheck that the body has the requested result type:

\begin{equation*}\dfrac{\Gamma, x_1 : \tau_1, \cdots, x_n : \tau_n \vdash b : \mathsf{\tau_r}}{\Gamma \vdash \mathtt{def\:}f(x_1 : \tau_1, \ldots, x_n : \tau_n)\:\:\mathtt{{-}{>}}\:\tau_r\mathtt{\::\:}b : \mathsf{(\tau_1, \cdots, \tau_n \rightarrow \tau_r)}}[\text{T-Fun}]\end{equation*}

#### 2.6 Typechecking entire programs

When considering an entire program, there are no variables in scope. So our program must type-check in the empty context.

Once we’ve typechecked a function definition, though, we must add it to our resulting context for the rest of the program.
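As a sketch of that accumulation, with a hypothetical `defn` record and a simplified `typ` standing in for the real AST, we can fold each definition’s arrow type into the (initially empty) context:

```ocaml
(* Hypothetical, simplified AST pieces; the real compiler's types differ. *)
type typ = TCon of string | TArr of typ list * typ

type defn = { name : string; arg_typs : typ list; ret_typ : typ }

(* Fold each definition's arrow type into the context; a whole program
   starts from the empty context []. *)
let env_of_defns (defns : defn list) : (string * typ) list =
  List.fold_left
    (fun env d -> (d.name, TArr (d.arg_typs, d.ret_typ)) :: env)
    [] defns

let () =
  let add = { name = "add"; arg_typs = [ TCon "Int"; TCon "Int" ]; ret_typ = TCon "Int" } in
  assert (env_of_defns [ add ]
          = [ ("add", TArr ([ TCon "Int"; TCon "Int" ], TCon "Int")) ])
```

Whether each definition’s body is then checked in the context of only the earlier definitions, or of all of them (allowing mutual recursion), is a language-design choice this sketch leaves open.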

### 3 Implementing the type system

We need a type definition to describe the syntax of our types:

```
type 'a typ =
  | TyVar of string * 'a            (* type variables like tau_1 *)
  | TyCon of string * 'a            (* type constants like Int or Bool *)
  | TyArr of 'a typ list * 'a typ   (* arrow types, containing argument types and return type *)
  | TyBlank of 'a                   (* the programmer didn't write down a type at all *)
  (* for later use *)
  | TyApp of 'a typ * 'a typ list   (* type applications, like `int list`, if we add them *)
```

The first three type constructors are straightforward. The `TyBlank` type annotation is slightly unusual, in that it indicates the lack of a type annotation. Any program with `TyBlank`s in it clearly cannot be typechecked, since some types were not given! We introduce this so that all our Diamondback programs can be parsed, even without type annotations. The `TyApp` constructor is subtle, and refers to instantiating generic types, such as writing `int list` or `(int, string) result`. We won’t use it right now (since we don’t have any generic types in our language yet), but it’s useful to add it now for future-proofing’s sake.
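For concreteness, the type \((\mathsf{Int}, \mathsf{Int} \rightarrow \mathsf{Int})\) that we gave `add` earlier is built from these constructors like so (instantiating `'a` to `unit`, as the implementation below also does; the `typ` definition is repeated here so the snippet stands alone):

```ocaml
type 'a typ =
  | TyVar of string * 'a
  | TyCon of string * 'a
  | TyArr of 'a typ list * 'a typ
  | TyBlank of 'a
  | TyApp of 'a typ * 'a typ list

(* (Int, Int -> Int): two Int arguments, one Int result *)
let add_typ : unit typ =
  TyArr ([ TyCon ("Int", ()); TyCon ("Int", ()) ], TyCon ("Int", ()))
```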

Do Now!

Design a function `string_of_typ : 'a typ -> string` that renders a type as a string of concrete syntax, suitable for error messages or other human reading.

Our type checker will effectively be a function

```
let rec type_check_prog env prog : bool =
  match prog with
  | Program(defns, body, typ, _) ->
    let env' = ... do stuff with the definitions ... in
    type_check_exp env' body typ
and type_check_exp env exp typ : bool =
  ...
```

that takes an initial environment (wherein we include any primitive operator types), the expression to check, and the desired type, and returns whether the expression has that type or not. Given the engineering in our compiler pipeline so far, it will turn out to be more convenient for our typechecker to either return the program it was given (i.e. act as an identity function), or throw an exception.

The inference rules above will turn into pattern-matching rules in `type_check_exp`, in a very natural way: we built our rules such that for every expression form in our language, there is at most one typing rule that applies. This design is known as a syntax-directed type system, and it makes our implementation very straightforward. We show just a few of the rules below, and leave the rest as an exercise:

```
(* In environment `env`, does expression `exp` have type `typ`? *)
let rec type_check_exp (env : (string * unit typ) list) (exp : pos expr) (typ : unit typ) : pos expr =
  match exp, typ with
  | _, TyBlank _ -> failwith "Program is not fully annotated, and cannot be typechecked"
  | ENum _, TyCon("Int", _) -> exp (* by T-Int *)
  | EBool _, TyCon("Bool", _) -> exp (* by T-Bool *)
  | EPrim2(Less, e1, e2, loc), TyCon("Bool", _) ->
    (* check both operands at Int, discarding the returned expressions *)
    ignore (type_check_exp env e1 (TyCon("Int", ())));
    ignore (type_check_exp env e2 (TyCon("Int", ())));
    exp (* by T-LessThan *)
  | ...
  | _, _ ->
    (* if none of the rules above worked, then we don't typecheck *)
    failwith "Type error occurred"
```

(I’ve specialized the type parameters for `typ` to `unit` because I don’t need any extra information in my types here; you might choose any of several possibilities for bookkeeping assistance, or you might just leave it as `unit` and not use it for now.)

Do Now!

Implement the remaining clauses for the other typing rules we’ve discussed above. What should the \(\text{T-Print}\) typing rule look like? How about \(\text{T-App}\)? (Hint: you might want to use `List.fold_left2` for that rule.)

Several elaborations of this skeleton are possible, and might improve the usability of the typechecker. By taking in a `pos expr`, we can report the location of a type error, and by using `string_of_typ` we can render a message along the lines of `"Expected type ... at location ..."`. It is possible that type-checking of full programs needs to manipulate environments more than our skeleton has, so we might want to enhance the return type of the function to return an accumulated environment of some sort.

#### 3.1 When should typechecking occur?

Looking at our pipeline, there is a tradeoff: if we run the typechecker after desugaring or simplifying our program, we might get away with fewer rules for fewer expression forms. This is good: the typechecker must be trusted code, since it is a gatekeeper for validating programs as being safe enough to compile. But if we operate over a transformed program, then our reported error messages might not bear any resemblance to the original program as written. Since type errors are critical for programmers to be able to fix their code, this ergonomic problem is a serious issue. Accordingly, our typechecker will run just after checking for well-formedness, so that we know that all names are properly scoped; any AST transformations can wait until after typechecking is complete.

¹ By Rice’s theorem, we’re inevitably out of luck if we wanted to get a correct and precise delineation between “correct” and “incorrect” programs.