This post is the first in a series on Logic. These ideas are very useful in understanding many important papers on programming language theory, especially papers on type theory and the lambda calculus. I will start with an explanation of sequents and Sequent Calculus as a system for doing logic, then I'll dive into linear logic in the next post. I'll finish the trilogy with a third post on Par and computational interpretations of classical logic.
I'm not going to present all the rules of any logical system in this post. Instead I'll just give examples of rules in each system, to explain how they work and help you read and write such rules yourself in the future. This is also partly because different sources present the same systems with slightly different, but ultimately equivalent, rules.
Logic, as a field of study, has thousands of years of history. But in the early 1900s it exploded into new and interesting directions, as researchers in mathematics looked to logic to try and understand what it was that they were doing and why they could trust it. A brilliant logician named Gerhard Gentzen (unfortunately a Nazi, though some argue his membership was apathetic, or even under duress) developed the following notation for reasoning in 1934, which will be important to understand for the rest of the post. Why do you need a notation to understand this post? Some mathematical notations are brilliant for revealing what's really going on and instilling the perfect intuition in your brain, and Gentzen's notation is truly one of the best in this category.
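Here's a small example of the notation, just to show its shape (we'll unpack what all the symbols mean in a moment):

```latex
\dfrac{A \vdash B \qquad C \vdash D}{A, C \vdash B, D}
```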
This is called a "sequent notation," because the pair of (possibly empty) lists of symbols surrounding a "turnstile" (the ⊢ symbol) is called a "sequent."
The horizontal bar describes "inputs" on the top and an "output" on the bottom. This lets you create chains, like so:
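For example, in this little chain the output of the upper step is reused as the input of the lower step:

```latex
\dfrac{\dfrac{P \vdash Q}{P, R \vdash Q}}{R, P \vdash Q}
```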
This notation is used in two ways: "inference rules" and "derivation trees." An inference rule only has one horizontal line, so only one "top" and one "bottom." A derivation tree is a chain (like the one above) of inference rules, where each horizontal line (called a "step," when talking about derivation trees) must match an inference rule. So the chain above, if it's a derivation tree, might be using the following two inference rules:
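As a sketch, two inference rules that would justify those steps might look like this:

```latex
\dfrac{A \vdash B}{A, C \vdash B}
\qquad
\dfrac{A, C \vdash B}{C, A \vdash B}
```

The first lets you add an extra symbol on the left of the turnstile; the second lets you swap the two symbols there.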
Notice how the capital letters in those rules (A, B, and C) act as "metavariables:" placeholders that can stand in for other symbols. The first rule matches the upper step of the chain, with A standing in for P, B for Q, and C for R, and the second rule matches the lower step the same way. Typically, capital letters are used as metavariables like this.
Unfortunately I really do mean "typically," and not "always:" there are many, many exceptions to this, and you'll often find yourself using context clues to decipher sequent notation.
If the same metavariable appears multiple times in the top of an inference rule, then the corresponding parts of the "input" sequent have to be equal to each other for it to be considered a valid match, because each metavariable can only match one thing, even if it appears multiple times. Note that metavariables can stand in for some component of an expression, an entire expression, or even multiple adjacent expressions in a list, but generally not for the whole sequent (both lists). Understanding what a metavariable is supposed to stand in for unfortunately also requires context clues! But when you see enough of these you quickly develop trustworthy expectations and intuitions, and the examples in this post will help with that.
Lastly, the bottom of an inference rule will often mention the metavariables on the top, which just means "whatever that metavariable matched in the input, fill that in here in the output." If there's a metavariable on the bottom that isn't mentioned on the top, then it could be filled with anything that the metavariable could match; it's up to you! You'll see clarifying examples in a moment.
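For example, consider this made-up rule with a repeated metavariable:

```latex
\dfrac{A, A \vdash B}{A \vdash B}
```

It matches the sequent P, P ⊢ Q (with A standing for P and B for Q), producing P ⊢ Q, but it does not match P, Q ⊢ R, because A can't stand for both P and Q at once. And in the earlier rule that added an extra symbol C on the left, C appeared only on the bottom, so you get to pick anything you like for it.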
Let's do some reasoning with Gentzen's notation! The left side of the turnstile will hold our assumptions, and the right side will hold our potential conclusions: a sequent is read "assuming everything on the left, at least one of the things on the right is true."
Allow me to emphasize: we are now giving an interpretation to the sequent notation. This is not the inherent meaning of the notation. This meaning comes from our choice of inference rules, which we've carefully designed to simulate formal logic. The underlying system, the sequent notation, is still just a dumb symbol rewriter, which we configure by our choice of inference rules.
Here are some simple inference rules for reasoning around conjunctions:
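In a fairly standard presentation (remember that the details vary between sources), they look like this:

```latex
\dfrac{\Gamma \vdash A, \Delta \qquad \Gamma \vdash B, \Delta}{\Gamma \vdash A \wedge B, \Delta}
\qquad
\dfrac{\Gamma, A, B \vdash \Delta}{\Gamma, A \wedge B \vdash \Delta}
```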
Here the capital Greek letters Γ ("Gamma") and Δ ("Delta") are metavariables that stand for entire (possibly empty) lists of propositions, while the capital Latin letters A and B stand for single propositions.
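We also need a rule that lets a proof get started at all, usually called the Axiom rule:

```latex
\dfrac{}{A \vdash A}
```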
This says that no matter what (empty top), if you assume something then you can conclude it (assume A, conclude A).
The Exch ("Exchange") rules let us reorder the lists, the Weak ("Weakening") rules let us introduce more assumptions or conclusions than we need (making the proof "weaker"), and the Contr ("Contraction") rules let us remove duplicate assumptions or conclusions.
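For instance, here are the left-hand versions of each (there are mirror-image right-hand versions too):

```latex
\dfrac{\Gamma, A, B, \Gamma' \vdash \Delta}{\Gamma, B, A, \Gamma' \vdash \Delta}
\qquad
\dfrac{\Gamma \vdash \Delta}{\Gamma, A \vdash \Delta}
\qquad
\dfrac{\Gamma, A, A \vdash \Delta}{\Gamma, A \vdash \Delta}
```

From left to right: Exchange, Weakening, and Contraction.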
Now we can start proving things, by constructing derivation trees. As long as the top is all "Axiom" steps and each step matches an inference rule, the derivation tree is a proof of the bottom sequent!
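For example, here's a derivation tree showing that conjunction commutes, with A ∧ B ⊢ B ∧ A as the bottom sequent:

```latex
\dfrac{
  \dfrac{
    \dfrac{\dfrac{\dfrac{}{B \vdash B}}{B, A \vdash B}}{A, B \vdash B}
    \qquad
    \dfrac{\dfrac{}{A \vdash A}}{A, B \vdash A}
  }{A, B \vdash B \wedge A}
}{A \wedge B \vdash B \wedge A}
```

Reading from the bottom up: the left conjunction rule, then the right conjunction rule, then Exchange, Weakening, and Axiom steps to close off each branch.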
What we have so far is called the Sequent Calculus, and it's so explicit and mechanical that it's a nice object of study in the field of Logic. But so many of these steps are administrative junk, so here's the same proof again with only the steps that matter to humans:
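Here's what that looks like, with all the structural bookkeeping stripped out:

```latex
\dfrac{
  \dfrac{\dfrac{}{A, B \vdash B} \qquad \dfrac{}{A, B \vdash A}}{A, B \vdash B \wedge A}
}{A \wedge B \vdash B \wedge A}
```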
This is a system called Sequent Natural Deduction. Only one proposition is allowed on the right of a turnstile. To see if two sequents match (so we can create a chain), we don't just look at what's written, but also imagine all the different ways of applying structural rules, to see if that produces any matches. The resulting system is much more complicated to study, but much nicer to use. One might say that a natural deduction proof distills the essence of the proof, presenting only the steps that matter. But there is an even more "natural" system than Sequent Natural Deduction, namely Natural Deduction:
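Here's our running example one more time in that style; the assumption A ∧ B simply sits at the top of each branch, and everything below it follows from it:

```latex
\dfrac{\dfrac{A \wedge B}{B} \qquad \dfrac{A \wedge B}{A}}{B \wedge A}
```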
Since a Sequent Natural Deduction sequent must always have exactly one proposition on the right side of the turnstile, it's possible to simply drop the assumptions and turnstile like this and have something that makes sense. Because really, the assumptions that matter for a proposition can always be found directly above it! As an object of study in the field of Logic, natural deduction is even more challenging than sequent natural deduction, exactly because it leaves more implicit so it can look like human reasoning.
(I'm borrowing the Stanford Encyclopedia of Philosophy's terminology here, because I like it, but people typically call sequent natural deduction and natural deduction both just "natural deduction," without distinction.)
Notice that in the Sequent Calculus, the rules for conjunction either introduce conjunctions on the left or introduce conjunctions on the right. Sequent natural deduction (and therefore natural deduction, too) instead uses "introduction rules" and "elimination rules." Here are examples:
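For conjunction, a typical presentation gives right-hand introduction and elimination rules (first row) along with left-hand counterparts (second row):

```latex
\dfrac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \wedge B}
\qquad
\dfrac{\Gamma \vdash A \wedge B}{\Gamma \vdash A}
\qquad
\dfrac{\Gamma \vdash A \wedge B}{\Gamma \vdash B}

\dfrac{\Gamma, A \vdash C}{\Gamma, A \wedge B \vdash C}
\qquad
\dfrac{\Gamma, B \vdash C}{\Gamma, A \wedge B \vdash C}
\qquad
\dfrac{\Gamma, A \wedge B \vdash C}{\Gamma, A, B \vdash C}
```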
Notice how all the rules are essentially duplicated: there's a version for each side of the turnstile. That said, natural deduction doesn't have the right contraction and right weakening rules, or any other rules that would require or introduce multiple propositions on the right of the turnstile. It's also worth being aware that conjunction on the left is equivalent to the comma (hence the funny left conjunction elimination rule), so the left conjunction introduction rules are basically just left weakening.
Now it seems we've got way too many rules: each rule is duplicated, with one version for the right of the turnstile and one for the left. Instead, we can use a one-sided sequent calculus, where we just do everything on the right of the turnstile. To do things this way, let's start with introduction and elimination rules for negation ("not"). We use introduction and elimination rules instead of left-introduction and right-introduction rules because now we only have one side!
These rules are starting to look very strange. It helps to remember that the comma here (on the right of the turnstile) is disjunction, so a one-sided sequent calculus is saying "one of these propositions is true."
Now, disjunction and negation are all you need to define the other logical constants in classical logic. For example, implication A → B can be defined as ¬A ∨ B.
If the introduction and elimination rules for conjunction aren't making sense with how you understand conjunction, think of it this way. The comma on the right is disjunction, so at least one proposition in the list is true. Let's look at the two premises of the conjunction introduction rule.
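In its one-sided form, the conjunction introduction rule typically looks like this, with two premises:

```latex
\dfrac{\vdash \Gamma, A \qquad \vdash \Gamma, B}{\vdash \Gamma, A \wedge B}
```

Each premise says "at least one of Γ, or the named proposition, is true." So either something in Γ is true, and we're done, or nothing in Γ is true, in which case the first premise forces A to be true and the second forces B to be true, so A ∧ B is true. Either way, at least one of Γ, A ∧ B is true, which is exactly what the conclusion claims.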
What you'll hopefully notice is that in the one-sided sequent calculus you can pretend that something is on the left simply by negating it. This is thought of as a deep beauty of logic, that the structure of the assumptions is just a negation of the structure of the potential conclusions. I'll explore this beauty more in future posts. It's this fact about simulating one side from the other with negation that allows one-sided sequent calculi to avoid duplicating rules (since there are no left rules), and thus have far fewer rules overall.
As elegant as the one-sided sequent calculus is, it isn't used much in the field. This is because it is inherently classical. Note that our Axiom rule became the Law of Excluded Middle! Intuitionistic logic requires a left side, because implication is no longer equivalent to disjunction with a negation, and the left and right rules are no longer negations of each other. The typical rules for a comma on the right don't really match the funny way disjunction behaves in intuitionistic logic, so intuitionistic logic is always done as (sequent) natural deduction, with only one proposition on the right, which may be an intuitionistic disjunction. You could theoretically redefine the comma on the right to be intuitionistic disjunction, since right now the right comma in intuitionistic logic is undefined, but there would be no benefit to doing so. Intuitionistic disjunction has completely different rules, which aren't dual to or comparable to the comma on the left at all, and a disjunction symbol on the right serves its purpose well enough without needing to introduce anything extra like special right-comma rules. Here are the intuitionistic disjunction inference rules:
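In their usual natural deduction presentation, there are two introduction rules and one case-analysis elimination rule:

```latex
\dfrac{\Gamma \vdash A}{\Gamma \vdash A \vee B}
\qquad
\dfrac{\Gamma \vdash B}{\Gamma \vdash A \vee B}
\qquad
\dfrac{\Gamma \vdash A \vee B \qquad \Gamma, A \vdash C \qquad \Gamma, B \vdash C}{\Gamma \vdash C}
```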
The connection between natural deduction and intuitionistic logic is fairly strong. (One might say natural!) Classical natural deduction is possible but famously disappointing/unsatisfactory. It can be done by introducing a second Axiom rule that matches our (classical) one-sided sequent calculus's Axiom rule, that is, the law of excluded middle:
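In sequent natural deduction form, that extra rule is just the law of excluded middle, available under any assumptions:

```latex
\dfrac{}{\Gamma \vdash A \vee \neg A}
```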
However, the useful thing about intuitionistic logic is that it can be given computational "proof terms." These are expressions in some typed lambda calculus, set up so that a fully-evaluated (normal-form) expression of a given type exists exactly when that type corresponds to a provable proposition of intuitionistic logic. Hence the name "proof terms:" the existence of a fully-evaluated expression of a type is itself a proof of the corresponding proposition. In general, the study of classical logic doesn't use proof terms, though in later posts I'll explore ways that you can, in fact, give proof terms for classical logic. But for now I'll focus the discussion on the simple case of intuitionistic logic.
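To make the idea concrete, here's a small sketch in Haskell (my own illustration, not a system from any particular paper): under the Curry-Howard reading, function types stand for implications and pair types stand for conjunctions, so a terminating, well-typed program is a proof of the proposition its type spells out.

```haskell
-- Proof terms via the Curry-Howard reading: function types play the
-- role of implication, pair types the role of conjunction.

-- A proof of (A and B) implies (B and A): given evidence for A and B,
-- swap it to get evidence for B and A.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- A proof of A implies (B implies A): keep the first piece of evidence
-- and ignore the second.
weaken :: a -> b -> a
weaken x _ = x

main :: IO ()
main = print (andComm (1 :: Int, "hi"), weaken 'c' True)
```

Note that andComm is exactly the commutativity of conjunction that we proved with derivation trees earlier.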
For simplicity we'll limit intuitionistic logic to only implication and conjunction.
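As a sketch, the rules for implication and conjunction, annotated with proof terms (I'm writing the terms in standard lambda calculus syntax), might look like this:

```latex
\dfrac{\Gamma, x : A \vdash e : B}{\Gamma \vdash \lambda x.\, e : A \to B}
\qquad
\dfrac{\Gamma \vdash e_1 : A \to B \qquad \Gamma \vdash e_2 : A}{\Gamma \vdash e_1\; e_2 : B}

\dfrac{\Gamma \vdash e_1 : A \qquad \Gamma \vdash e_2 : B}{\Gamma \vdash (e_1, e_2) : A \wedge B}
\qquad
\dfrac{\Gamma \vdash e : A \wedge B}{\Gamma \vdash \mathsf{fst}\; e : A}
\qquad
\dfrac{\Gamma \vdash e : A \wedge B}{\Gamma \vdash \mathsf{snd}\; e : B}
```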
Now we have these colons everywhere: e : A is read "e is a proof of A," or equivalently, "the expression e has type A."
Since metavariable use is a little more complex here, I'll introduce the proper formal notation for describing them. We usually specify the behavior of metavariables with a "formal grammar," technically called a "BNF grammar" or "Backus-Naur Form," written as follows:
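For the little language above, one reasonable grammar might look like this (the metavariable names are a choice, not a standard; P ranges over atomic propositions):

```latex
\begin{aligned}
A, B &::= P \mid A \to B \mid A \wedge B \\
e &::= x \mid \lambda x.\, e \mid e\; e \mid (e, e) \mid \mathsf{fst}\; e \mid \mathsf{snd}\; e \\
\Gamma &::= \cdot \mid \Gamma, x : A \\
J &::= \Gamma \vdash e : A
\end{aligned}
```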
On the left of the ::= is the metavariable being defined, and on the right is the list of forms it's allowed to take, separated by vertical bars (read them as "or"). The last line of the grammar defines the shape of the judgements themselves.
This specifies what a valid sequent (often called a "judgement" in this context) looks like, and should be thought of as equivalent to a statement like "a valid sentence is a noun phrase followed by a verb phrase." In this case it would be like "a valid sequent is a context (Γ), followed by a turnstile, followed by a term, a colon, and a proposition."
Thus ends the crash course on sequent notation and sequent calculi! A sequel post on linear logic is in the works, which makes heavy and elegant use of this system.
© 2024 Ryan Brewer.