\documentclass[12pt,a4paper]{book} 
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{stmaryrd}
\usepackage[a4paper]{geometry}
\usepackage{graphicx}
\RequirePackage[l2tabu, orthodox]{nag}
\usepackage{microtype}
\usepackage{proof}
\usepackage{hyperref}
\usepackage{cs2,epsfig}
\usepackage{wrapfig}
\usepackage{cancel}

\usepackage{fancyhdr}
% \thispagestyle{fancy}
\pagestyle{fancy}


\newcommand{\missingbit}[1]{{\bf #1}}
\newcommand{\pand}{\ensuremath{\wedge}}
\newcommand{\por}{\ensuremath{\vee}} 
\newcommand{\pimp}{\ensuremath{\mathrel{\rightarrow}}} 
\newcommand{\pnot}{\ensuremath{\mathop{\neg}}} 
\newcommand{\adef}{::}
\newcommand{\athen}{\; then \;}
\newcommand{\aor}{\; or \;}
\newcommand{\ain}{\; \Leftarrow \;}
\newcommand{\aout}{\; \Rightarrow \;}

\newtheorem{lemma}{Lemma}
\newtheorem{theorem}{Theorem}
\theoremstyle{definition}
\newtheorem{definition}{Definition}
\newtheorem{exercise}{Exercise}
\newtheorem{example}{Example}
\newtheorem{problem}{Problem}


\begin{document}
%\setlength{\headrulewidth}{0pt}
%\addtolength{\headheight}{10pt}

%\tableofcontents

\chapter*{Notes on Deduction and Entailment} \label{inference}
\lhead{{\small\textsl{Informatics 1}}}
\rhead{{\small\textsl{Computation and Logic }}}

\noindent
Consider the following argument\footnote{This example comes from
  \emph{Set Theory and Logic}, by Robert R. Stoll (Dover 2003) p190.}\\
\\
\textbf{Assumptions:\ }
If the races are fixed or the gambling houses are crooked, then the tourist trade will decline.\\ 
If the tourist trade declines then the police force will be happy.\\
The police force is never happy.\\
\textbf{Conclusion:\ }
The races are not fixed.
~\\

We introduce a number of propositional variables to reduce the problem
to propositional form:
\begin{align*}
  \qquad\textbf{Assumptions:}&&
  \textrm{RF}\vee\textrm{GC}\rightarrow &\textrm{TTD}\qquad\qquad\\
  &&\textrm{TTD}\rightarrow&\textrm{PH}\\
  &&\neg &\textrm{PH}\\
  \textbf{Conclusion:}&&
  \neg &\textrm{RF}
\end{align*}
We say this inference is valid iff, in any situation in which all of
the Assumptions are true, the Conclusion is also true.

If we want to check the validity of an inference in propositional logic, we can
use truth tables. The inference is valid if there are no
counter-examples.
We can also use resolution to show that there are no counter-examples.
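This truth-table check can be carried out mechanically. The following is a minimal sketch (the variable names RF, GC, TTD and PH follow the text): enumerate all sixteen valuations and look for a counter-example, a valuation making every assumption true and the conclusion false.

```python
# Brute-force truth-table check of the inference above: the inference
# is valid iff no valuation makes all the assumptions true and the
# conclusion false.
from itertools import product

def implies(p, q):
    return (not p) or q

def is_valid():
    for RF, GC, TTD, PH in product([False, True], repeat=4):
        assumptions = (implies(RF or GC, TTD)
                       and implies(TTD, PH)
                       and not PH)
        if assumptions and RF:      # conclusion ¬RF fails here
            return False            # counter-example found
    return True

print(is_valid())  # True: no counter-example exists
```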

However, it is sometimes simpler, and often more
persuasive, to find a chain of reasoning that justifies the
passage from the assumptions to the conclusion.

We can present an argument for this conclusion as follows:
\[
\infer{\neg\textrm{RF}}{
  {\infer{\neg\textrm{RF}\wedge \neg\textrm{GC}}
    {\infer{\neg(\textrm{RF}\vee\textrm{GC})}
      {(\textrm{RF}\vee\textrm{GC})\rightarrow \textrm{TTD}&
        \infer{\neg \textrm{TTD} }
        {\textrm{TTD}\rightarrow\textrm{PH}&\neg\textrm{PH}}
      }
    }}}
\]
Here, each horizontal line indicates that any valuation that makes all the
assumptions above the line true also makes the conclusion below the
line true.

The point of such a presentation is that we break the argument down
into small steps. We then argue, if argument is needed, about
each of the steps. For example, we can argue that the last two steps
of this derivation are sound by appeal to de Morgan's laws and the meaning of
$\wedge$\,. 

The remaining two steps of this inference share a common form. They are both instances of
a form of argument that goes by the resonant name of \emph{modus tollendo tollens}:\footnote{Latin for ``the way that denies by denying''.}
\[\infer[\emph{modus tollendo tollens}]{\neg X}{X\rightarrow Y&\neg
  Y}\]
The study of such ``forms of argument'' goes back to ancient Greece. We
can easily check the soundness of the rule: any valuation making both of
the assumptions true makes the conclusion true (the checking is easy
because there is only one such valuation). The rule is general because
we can substitute any expressions for $X$ and $Y$, so this single rule
has infinitely many instantiations.
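The soundness check just described can be made explicit. In this sketch we enumerate the four valuations of $X$ and $Y$, keep those satisfying both premisses, and confirm that the conclusion holds in each; as noted, exactly one valuation survives.

```python
# Soundness check for modus tollendo tollens: every valuation
# satisfying both premisses, X -> Y and not-Y, must satisfy not-X.
from itertools import product

models = [(X, Y) for X, Y in product([False, True], repeat=2)
          if ((not X) or Y) and not Y]   # premisses: X -> Y, ¬Y

print(models)                            # [(False, False)]: only one survives
assert all(not X for X, Y in models)     # conclusion ¬X holds in each
```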

We have already seen several examples of sound rules. Here are some
more classical forms:\footnote{The Latin names follow a pattern. For
  example \emph{modus ponendo tollens} is the mode that by affirming A
  denies B, while \emph{modus tollendo ponens} is the mode that by denying A
affirms B. The mode that affirms B by affirming A, \emph{modus
    ponendo ponens}, is familiarly known as \emph{modus ponens}. }
\begin{gather*}
\infer[\emph{modus tollendo tollens}]{\neg A}{A\rightarrow B&\neg B}\qquad
\infer[\emph{modus tollendo ponens}]{B}{\neg A &A\vee B}\\
\infer[\emph{modus ponendo tollens}]{\neg B}{A&\neg(A\wedge B)}\qquad
\infer[\emph{modus ponendo ponens}]{B}{A&A \rightarrow B}
\end{gather*}
If we express all the assumptions in clausal form, these can
themselves be seen as instances of a common pattern.
\begin{gather*}
\infer[\emph{modus tollendo tollens}]{\neg A}{\neg A\vee B&\neg B}\qquad
\infer[\emph{modus tollendo ponens}]{B}{\neg A &A\vee B}\\
\infer[\emph{modus ponendo tollens}]{\neg B}{A&\neg A\vee \neg B}\qquad
\infer[\emph{modus ponendo ponens}]{B}{A&\neg A \vee B}
\end{gather*}
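The common pattern is resolution on a complementary pair of literals. The following sketch (an illustration only, with a clause represented as a set of signed literals) derives two of the four modes:

```python
# Resolution on a complementary pair: remove the positive literal A
# from one clause and the negative literal ¬A from the other, and
# take the union of what remains.
def resolve(c1, c2, atom):
    assert ('+', atom) in c1 and ('-', atom) in c2
    return (c1 - {('+', atom)}) | (c2 - {('-', atom)})

# modus ponendo ponens: from A and ¬A ∨ B, conclude B
print(resolve({('+', 'A')}, {('-', 'A'), ('+', 'B')}, 'A'))  # {('+', 'B')}

# modus tollendo tollens: from ¬A ∨ B and ¬B, conclude ¬A
print(resolve({('-', 'A'), ('+', 'B')}, {('-', 'B')}, 'B'))  # {('-', 'A')}
```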
We have seen how we can construct arguments from such rules by
building trees. The nodes of these trees are expressions, and each
rule derives a valid conclusion from valid premisses.

Of course this is not the only chain of reasoning\footnote{\emph{Chain
    of reasoning} is perhaps an unfortunate term,
since most reasoning is structured in trees rather than chains.} we
could have used. For example, we can recast our deduction as follows:
\[
\infer{\neg\textrm{RF}}
{
  \textrm{RF}\rightarrow \textrm{RF}\vee\textrm{GC}&
  {\infer{\neg(\textrm{RF}\vee\textrm{GC})}
    {(\textrm{RF}\vee\textrm{GC})\rightarrow \textrm{TTD}&
      \infer{\neg \textrm{TTD} }
      {\textrm{TTD}\rightarrow\textrm{PH}&\neg\textrm{PH}}
    }
  } 
}
\]
We now have three instances of \emph{tollendo tollens}, and an
additional assumption, $\textrm{RF}\rightarrow
\textrm{RF}\vee\textrm{GC}$. We can easily check the validity of the new
assumption, using a truth table.



We say a step is \emph{sound} if any valuation that makes its
antecedents true also makes its consequent true. Once
we agree that each step is sound, we must
agree that the conclusion follows from the assumptions. We then say that
the argument is valid, and write:
\[  \textrm{RF}\vee\textrm{GC}\rightarrow \textrm{TTD},\ 
  \textrm{TTD}\rightarrow\textrm{PH},\ 
  \neg \textrm{PH}\vdash \neg \textrm{RF}\]
The `turnstile' $\vdash$ signifies that we can derive this entailment
using sound rules. Since the individual steps are sound,
the conclusion follows from the assumptions. 


\noindent
We now consider a related argument

\noindent
\textbf{Assumptions:\ }
If the races are fixed or the gambling houses are crooked, then the tourist trade will decline.\\ 
If the tourist trade declines then the police force will be happy.\\
\textbf{Conclusion:\ }
If the police force is never happy, then the races are not fixed.

Is this valid? It is natural to argue that \emph{because the original
  argument is valid}, so is this one. We can express this as a rule about entailments:
\[ \infer  {\textrm{RF}\vee\textrm{GC}\rightarrow \textrm{TTD},\ 
  \textrm{TTD}\rightarrow\textrm{PH}
  \vdash \neg \textrm{PH}\rightarrow \neg \textrm{RF}} {\textrm{RF}\vee\textrm{GC}\rightarrow \textrm{TTD},\ 
  \textrm{TTD}\rightarrow\textrm{PH},\ 
  \neg \textrm{PH}\vdash \neg \textrm{RF}}\]
If the assumptions of this
argument, together with the assumption that the police force is never
happy, imply that the races are not fixed, then the assumptions of
this argument imply that, if the police force is never happy, then the
races are not fixed. 

We don't need to look at the
justification of the original argument to see this; it's just a fact
about deduction and implication. 
We can express this fact as a general rule
about entailments, called the \emph{deduction theorem}.\footnote{This rule is called the
  \emph{deduction theorem} because, for some axiomatisations of logic,
  it is not taken as a primitive, but proved as a metatheorem.} This rule says that to show $A\rightarrow B$ it suffices to
assume $A$ and show $B$.
\emph{If, from some premisses, $\Delta$, together with $A$ we can infer
$B$, then from $\Delta$ we can infer that $A\rightarrow B$.}

We now shift our focus. Instead of looking at the forms of argument
that can be combined to produce valid inferences, we want to
characterise valid inferences. We introduce a notation, $\Gamma\vdash
A$ for the relation \emph{from $\Gamma$ we can infer $A$}, where
$\Gamma$ is a set of expressions, and $A$ is an
expression.\footnote{In the next section, we will extend the use of
  $\ \vdash$ to include the possibility of multiple alternative conclusions.}
We can then write the deduction theorem symbolically.
\[ 
\vcenter{\hbox{\infer[(\rightarrow^+)]{\Delta \vdash A\rightarrow B}{\Delta, A\vdash B}}}
\qquad \qquad
\vcenter{\hbox{\infer*{\textrm{B}}{\textrm{A}& \Delta}}}
\quad \Rightarrow\quad 
\vcenter{\hbox{\infer*{\textrm{A}\rightarrow\textrm{B}}{\textrm{\cancel{A}}&
      \Delta}}}\]
The rule on the left expresses the relationship between the two proof
trees on the right.

We can use the turnstile ($\vdash$) notation to express facts about inference;
for example, the fact that
inference trees can be linked together to form larger inferences.
\[
\vcenter{\hbox{\infer[Cut]{\Gamma, \Delta \vdash B}{\Gamma \vdash A&\Delta, A \vdash
  B}}} \qquad\qquad
\vcenter{\hbox{\infer*{A}{\Gamma}}}\quad
\vcenter{\hbox{\infer*{B}{\Delta&A}}}
\quad 
\Rightarrow\quad 
\vcenter{\hbox{\infer{B}{\Delta&\infer*{A}{\Gamma}}}}\]
We can also write rules for
$\wedge,\vee,\rightarrow$.
 \[
  \begin{matrix}
    &\infer[(I)]{\mathcal{A},X\vdash X}{}&
    \\
    \\
    \infer=[(\wedge)]{\mathcal{A}\vdash X\wedge Y}{\mathcal{A}\vdash X&\mathcal{A}\vdash Y}
    &
    \infer=[(\vee)]{\mathcal{A},X\vee Y\vdash Z}{\mathcal{A}, X\vdash
      Z&\mathcal{A}, Y\vdash Z}
    &
    \infer=[(\rightarrow)]{\mathcal{A}\vdash X\rightarrow  Y}
    {\mathcal{A},X\vdash Y}
  \end{matrix}
  \]
Here, $\mathcal{A}$ is a variable over sets of expressions of
  propositional logic, and $X$, $Y$ and $Z$ are variables over expressions
  themselves. 
 We read the `turnstile' $\vdash$
  symbol as \emph{entails}.

The \emph{immediate} rule $(I)$ has no assumptions. The double line
used for the other three rules means that the rule can be used in
either direction. The entailment below the double line is valid iff
\emph{all} of the entailments above the line are valid.
Read from top to bottom, they are called \emph{introduction rules}
  ($\circ^+$), 
since they introduce a new connective into the argument. Read from 
bottom to top, they are \emph{elimination rules} ($\circ^{{}-{}}$) since a
connective is eliminated.

It has been argued that such rules
 encapsulate the meanings of the connectives, by saying how they can
 be used in arguments.  For this
 reason, we call these \emph{natural deduction} rules. Thus, for
 example, to argue something from the assumption $X\vee Y$, we must be
 prepared to argue it on the assumption $X$ and also on the assumption $Y$.

These rules allow us to produce \emph{valid}
entailments. We say that $\mathcal{A}\models X$
iff every valuation that makes the premisses $A\in \mathcal{A}$ \emph{true} also
makes the conclusion $X$ \emph{true}. We want to ensure that if
$\mathcal{A}\vdash X$ then $\mathcal{A}\models X$. For example, the
entailment produced by the
rule $(I)$ is certainly valid, since $X$ occurs on both sides of the turnstile.
It is easy to check that (\textit{Cut}) and all the natural deduction rules preserve validity.

Proofs using these rules typically use a mixture of introduction and
elimination rules. Here is a simple example.
\begin{example}\label{inf-swap}
\[\infer[(\rightarrow^+)]{A\rightarrow (B \rightarrow C)\vdash B\rightarrow (A
  \rightarrow C)}
{\infer[(\rightarrow^+)]{A\rightarrow (B \rightarrow C),B\vdash A
  \rightarrow C}
{\infer[(\rightarrow^-)]{A\rightarrow (B \rightarrow C),A,B\vdash C}
{\infer[(\rightarrow^-)]{A\rightarrow (B \rightarrow C),A\vdash B\rightarrow C}
{\infer[(I)]{A\rightarrow (B \rightarrow C)\vdash A\rightarrow (B \rightarrow C)}
{}}}}}\]
\end{example}
Such proofs may be natural, but in more complex cases they are not
always easy to find.

\begin{exercise}\label{imp-proof}
Using the cut rule we can build the following proof tree

\[\infer[\textit{Cut}]{A\wedge B \vdash A\vee B}
{
\infer[(\wedge^-)]{A\wedge B\vdash A}{\infer[(I)]{A\wedge B\vdash A\wedge B}{}}
&
\infer[(\vee^-)]{A\vdash A\vee B}{\infer[(I)]{A\vee B\vdash A\vee
    B}{}}}\]
Can you prove this conclusion from the rules of natural deduction without using the cut rule?
\end{exercise}

The alert reader will have noticed that negation is conspicuous by its
absence from the discussion above. The natural deduction system we
have presented can be extended to include negation, but the extension
is less than natural. We will not present it here.

Instead, we present a system, introduced by Gerhard Gentzen, in which
proofs are easy to find. Thus far, our entailments have had a set of
premisses and a single conclusion. These can be viewed as conditional
assertions: the conclusion is asserted under the condition that the
premisses are true.

Gentzen introduced sequents that include multiple premisses \emph{and}
multiple conclusions. From a set of antecedents, $\Gamma$, we can infer a set of succedents $\Delta$. The intended interpretation is that if
\emph{all} of the antecedents are true then at least \emph{one} of the
succedents is true.


\begin{definition}
We
say the \emph{inference} of $\Delta$ from $\Gamma$ is \emph{valid} if every
valuation making every antecedent true makes at least one of the succedents true. In symbols,
\[\Gamma \models \Delta \qquad\textrm{iff}\qquad
\parbox{10cm}{every valuation\\
  that makes all of the antecedents, $\Gamma$, true\\
 also makes at least one of the succedents
, $\Delta$, true.}
\]
A \emph{counter-example} is a valuation that makes all of the
antecedents true, and all of the succedents false:
\[\Gamma \not\models \Delta \qquad\textrm{iff}\qquad
\parbox{10cm}{some valuation (a counter-example)\\
  makes all of the antecedents, $\Gamma$, true\\
 but makes all of the
  succedents, $\Delta$, false.}
\]
\end{definition}
This definition introduces a duality between the two sides of the
entailment, because we could, equivalently, say that
the \emph{inference} of $\Delta$ from $\Gamma$ is \emph{valid} if every
valuation making every succedent false makes at least one of the antecedents false.
In symbols,
\[\Gamma \models \Delta \qquad\textrm{iff}\qquad
\parbox{10cm}{every valuation\\
  that makes all of the succedents, $\Delta$, false\\
 also makes at least one of the antecedents 
, $\Gamma$, false.}
\]

This duality allowed Gentzen to
introduce a beautifully symmetric set of rules, which are all introduction
rules.
This means that our proofs can be goal-directed: we start with the
goal of proving the bottom line. The rules we can use to produce this
bottom line are determined by the principal connectives occurring in
the goal. Once we have chosen a rule, we remove the goal that
corresponds
to its conclusion, and replace it by the entailments above the line, which
are then new goals.
A goal-directed proof always produces simpler and simpler
sequents (though possibly very many of them) as our trees grow upwards.



\section*{Sound and Complete Deduction} We now introduce Gentzen's
rules,\footnote{Gentzen introduced many different sets of rules. The
  one we introduce here is the multiplicative variant of his system
  LK.} which allow us to derive valid sequents. These define a
relation $\vdash$ between finite sets of expressions.
\begin{definition}
The relation $\Gamma\vdash\Delta$, where $\Gamma$ and $\Delta$ are
finite sets of expressions, is defined to be the smallest
relation for which all of the following rules are satisfied,

($\Gamma,\Delta$ vary over finite sets of
expressions; $A,B$ vary over expressions):
\begin{gather*}
    \infer[(I)]{\Gamma,A\vdash \Delta,A}{}\\
    \begin{matrix}
      \infer[(\wedge L)]{\Gamma ,A\wedge B\vdash\Delta}{\Gamma,A,
        B\vdash\Delta}
      & 
      \infer[(\vee R)]{\Gamma\vdash A\vee B, \Delta}{\Gamma \vdash A,B,\Delta}
      \\
      \\
      \infer[(\vee L)]{\Gamma, A\vee B\vdash \Delta}{\Gamma ,A\vdash\Delta&\Gamma ,B\vdash\Delta}& 
      \infer[(\wedge R)]{\Gamma \vdash A\wedge B,\Delta}{\Gamma\vdash
        A,\Delta&\Gamma\vdash B,\Delta}\\
      \\
      \infer[(\rightarrow L)]{\Gamma ,A\rightarrow
        B\vdash\Delta}{\Gamma\vdash A,\Delta&\Gamma, B\vdash\Delta}
      & 
      \infer[(\rightarrow R)]{\Gamma\vdash A\rightarrow B, \Delta}{\Gamma
        ,A\vdash B,\Delta}\\
      \\
      \infer[(\neg L)]{\Gamma ,\neg A\vdash\Delta}{\Gamma\vdash A,\Delta}
      & 
      \infer[(\neg R)]{\Gamma\vdash \neg A, \Delta}{\Gamma ,A\vdash\Delta}
    \end{matrix}
  \end{gather*}
\end{definition}
A relation defined in this way is said to be \emph{inductively
  defined}. The rule ($I$), with no assumptions, tells us that
whenever $\Gamma\cap\Delta \not=\emptyset$ the relation
$\Gamma\vdash\Delta$ holds. We can then use the other rules to
derive other instances of the relation.
For example, we can use these rules to prove the result of Exercise \ref{imp-proof}.
\[\infer[(\vee R)]{A\wedge B \vdash A\vee B}{
  \infer[(\wedge L)]{A\wedge B \vdash A,B}{
    \infer[(I)]{A,B \vdash A,B}{
}}}
\]
The inductive definition of $\vdash$ means that $\Gamma\vdash\Delta$
iff it is the conclusion of such a proof. So to show that some fact
holds for all instances of the relation $\vdash$ it suffices to show
that it holds for the conclusion of any proof.

To show that something holds for the conclusion of any proof it
suffices to show that it holds for the conclusion of any ($I$) rule
(the base case), and that if it holds for the assumptions of any rule
then it holds for the conclusion (inductive step). This is called
proof by induction on the proof of $\Gamma\vdash\Delta$.

In fact, requiring that the property hold for the conclusion of a rule
whenever it holds for all of that rule's assumptions covers both the
inductive step and the base case, because the ($I$) rule has no assumptions.

Sequents proved by Gentzen's rules have two crucial properties, both
of  which we prove by
induction. The first is soundness.
\begin{theorem}
\begin{equation*} 
\textrm{if }\Delta \vdash A \textrm{ then } \Delta \models
A\phantom{(\textbf{completeness})}
\tag{\textbf{soundness}}
\end{equation*}
\end{theorem}
\begin{proof}
This will be true for any inference system based on a set of sound
rules. That is, it suffices to show that each rule preserves validity:
if $\Gamma_i\models\Delta_i$ holds for each of its assumptions
$\Gamma_i\vdash\Delta_i$, and its conclusion is
$\Gamma'\vdash\Delta'$, then
$\Gamma'\models\Delta'$.

It is straightforward to check this for each of Gentzen's rules. We
can do this by showing that if there is a counter-example to the
conclusion then there is a counter-example to at least one of the
assumptions. Consider, for example,
\[
\infer[(\rightarrow L)]{\Gamma ,A\rightarrow
  B\vdash\Delta}{\Gamma\vdash A,\Delta&\Gamma, B\vdash\Delta}\]
A counter-example to the conclusion makes $A\rightarrow
  B$ and
everything in $\Gamma$ true; and makes everything in $\Delta$ false.

To make $A\rightarrow B$ true, either $A$ is false, in which case we
have a counter-example to the first assumption, $\Gamma\vdash A,\Delta$,
or $B$ is true, in which case we have a counter-example to the second
assumption, $\Gamma, B\vdash\Delta$.

Checking the other rules is left as an exercise for the reader.
\end{proof}
This shows us, for example, that $A\wedge B \models A\vee B$, since we
gave a proof of $A\wedge B \vdash A\vee B$ above.

We can also prove our Example \ref{inf-swap}.
\[
 \infer[(\rightarrow R)]{A\rightarrow (B \rightarrow C)\vdash
  B\rightarrow (A \rightarrow C)}
{
\infer[(\rightarrow L)]{A\rightarrow (B \rightarrow C),B\vdash A
  \rightarrow C}
{\infer[(\rightarrow
  R)]{B\vdash A, A
  \rightarrow C}{\infer[(I)]{A,B\vdash A,C  }{}}
&
\infer[(\rightarrow L)]{B \rightarrow C,B\vdash A\rightarrow C}
{\infer[(I)]{B\vdash B,A\rightarrow C}{}
&\infer[(\rightarrow
  R)]{C,B\vdash A\rightarrow C}{\infer[(I)]{A,B,C\vdash C}{}}
}
}
}
\]
In each case we have little choice about which rules to apply. The
available rules are determined by the top-level connectives occurring
in our current goal. We can choose the order in which we apply them,
but we can just keep applying rules until there are no connectives
left. It is clear from the form of the rules that if the original goal
contains $N$ connectives, then the maximum depth of
the tree is $N+1$. 
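The goal-directed procedure just described is easy to implement. The following is a sketch, not part of the notes: a formula is assumed to be an atom (a string) or a tuple such as \texttt{('->', 'A', 'B')}, and a sequent a pair of lists. Since each rule application removes one connective, the recursion terminates.

```python
# Goal-directed proof search for Gentzen's rules. A formula is an
# atom (a string) or a tuple ('not', A), ('and', A, B), ('or', A, B),
# ('->', A, B). Each recursive call removes one connective.
def provable(gamma, delta):
    """True iff the sequent  gamma |- delta  is provable."""
    # (I): an atom occurs on both sides of the turnstile
    if {f for f in gamma if isinstance(f, str)} & \
       {f for f in delta if isinstance(f, str)}:
        return True
    for f in gamma:                      # left rules
        if isinstance(f, tuple):
            rest = [g for g in gamma if g is not f]
            if f[0] == 'not':            # (neg L)
                return provable(rest, delta + [f[1]])
            if f[0] == 'and':            # (and L)
                return provable(rest + [f[1], f[2]], delta)
            if f[0] == 'or':             # (or L): both branches must close
                return (provable(rest + [f[1]], delta) and
                        provable(rest + [f[2]], delta))
            if f[0] == '->':             # (-> L): both branches must close
                return (provable(rest, delta + [f[1]]) and
                        provable(rest + [f[2]], delta))
    for f in delta:                      # right rules
        if isinstance(f, tuple):
            rest = [g for g in delta if g is not f]
            if f[0] == 'not':            # (neg R)
                return provable(gamma + [f[1]], rest)
            if f[0] == 'and':            # (and R): both branches must close
                return (provable(gamma, rest + [f[1]]) and
                        provable(gamma, rest + [f[2]]))
            if f[0] == 'or':             # (or R)
                return provable(gamma, rest + [f[1], f[2]])
            if f[0] == '->':             # (-> R)
                return provable(gamma + [f[1]], rest + [f[2]])
    # atomic sequent with no shared atom: a counter-example exists
    return False

print(provable([('and', 'A', 'B')], [('or', 'A', 'B')]))   # True
print(provable(['A'], ['B']))                              # False
```

Since counter-examples propagate upwards through every rule, the order in which the rules are applied does not matter, and the search needs no backtracking over rule choices.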

Showing that a set of rules is sound is usually
straightforward. Showing completeness, that every valid sequent can be
proved, is often trickier. We will see that, for Gentzen's rules, propositional
completeness is also straightforward. First we need a lemma.

We have seen that Gentzen's rules are sound. They have a 
second crucial property.
\begin{lemma}
Each of Gentzen's rules is complete, in the sense that, if there is a counter-example to any of the assumptions of 
one of these rules, then that counter-example is also a
counter-example to the conclusion. 
\end{lemma}
\begin{proof}
Again, this can be checked on a
case-by-case basis, and we give one example, leaving the rest as an
exercise.
We again focus on the ($\rightarrow L$) rule.
\[
\infer[(\rightarrow L)]{\Gamma ,A\rightarrow
  B\vdash\Delta}{\Gamma\vdash A,\Delta&\Gamma, B\vdash\Delta}\]
A counter-example to either of the assumptions will make everything
in $\Gamma$ true and everything in $\Delta$ false. A
counter-example to the first assumption will make $A$ false, and a
counter-example to the second will make $B$ true. So,
a counter-example to either of them will make $A\rightarrow B$ true,
and provide a counter-example to $\Gamma ,A\rightarrow
  B\vdash\Delta$.
\end{proof}

\begin{theorem}
\begin{equation*} 
\textrm{if }\Delta \models A \textrm{ then } \Delta \vdash A\phantom{(\textbf{soundness})}
\tag{\textbf{completeness}}\ 
\end{equation*}
\end{theorem}
\begin{proof}
 We simply have to consider what will
happen if we keep applying Gentzen's rules, in goal-directed fashion,
until we can do no more.

Once all connectives have been removed, after at most $N$ levels,
each remaining goal is a sequent containing only atomic expressions
(propositional letters). If the same atom occurs on both sides of the
turnstile, we can apply the immediate rule, to discharge the goal.
For any goal where there is no atom that occurs on both sides of the
turnstile,
the valuation that makes every atom to the left of the turnstile true,
and every atom to the right false, provides a counter-example.

The lemma tells us that we can push this
counter-example down, step-by-step, through the attempted proof.
When we reach the bottom we find that it must be a counter-example to
our original goal.

So, for every sequent we can find either a proof or a counter-example.
In particular, if we don't find a proof, we find a counter-example, which
shows that our set of rules is complete: any sequent for which
there is no counter-example is provable using this set of rules.
\end{proof}
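The counter-example construction for atomic sequents used in this proof is simple enough to state as code (a sketch; atoms are assumed to be strings):

```python
# For an atomic sequent with no atom on both sides, the valuation
# making every left atom true and every right atom false is a
# counter-example; if an atom is shared, rule (I) closes the goal.
def counterexample(gamma, delta):
    if set(gamma) & set(delta):
        return None                      # (I) applies: the sequent is provable
    return {**{a: True for a in gamma},
            **{a: False for a in delta}}

print(counterexample(['A', 'B'], ['C']))   # {'A': True, 'B': True, 'C': False}
print(counterexample(['A'], ['A', 'B']))   # None
```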
\begin{exercise}Since the rules $(\wedge R), (\vee L), (\rightarrow
  L)$ split the search for a proof tree into two branches, a useful  heuristic 
is to defer the use of these rules until no other options are
available.

Use this heuristic to give a simpler proof of Example~\ref{inf-swap}:
\[A\rightarrow (B \rightarrow C)\vdash
  B\rightarrow (A \rightarrow C)\] 
\end{exercise}
\begin{exercise}
  The Stoic philosophers of Ancient Greece defined five standardized
  forms of argument. These are called \emph{indemonstrables}, since
  they are taken as axiomatic, and require no proof.
  \begin{itemize}
  \item A first indemonstrable is an argument composed of a conditional and its antecedent as premises, having the consequent of the conditional as conclusion.
  \item A second indemonstrable is an argument composed of a conditional and the contradictory of its consequent as premises, having the contradictory of its antecedent as conclusion.
  \item A third indemonstrable is an argument composed of a negated conjunction and one of its conjuncts as premises, having the contradictory of the other conjunct as conclusion.
  \item A fourth indemonstrable is an argument composed of a disjunctive assertible and one of its disjuncts as premises, having the contradictory of the remaining disjunct as conclusion.
  \item A fifth indemonstrable, finally, is an argument composed of a disjunctive assertible and the contradictory of one of its disjuncts as premises, having the remaining disjunct as conclusion.
  \end{itemize}
  \begin{enumerate}
    \item Which of these corresponds to \emph{modus tollendo tollens}?
    \item Give rules in modern form for the other four indemonstrables.
  \end{enumerate}
\end{exercise}
\lfoot{\small{\copyright Michael Fourman 2014-2015}}

\end{document}
