Thursday 28 January 2016

logic - Relative Consistency Lemma with finitistic proof



In set theory, one uses the following lemma to provide relative consistency proofs. I have a question concerning the proof of this lemma. First, here is the statement:



Suppose that $S$ and $T$ are theories over the language $\mathcal{L}(\in)$ of set theory and let $P$ be a class (or, if you want, a new predicate symbol added to $\mathcal{L}(\in)$). If $S\vdash\exists x P(x)$ and $S\vdash \varphi^{P}$ for all $\varphi\in T$, then the consistency of $S$ implies the consistency of $T$.
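For orientation, a typical application of this lemma (it is how one proves the consistency of $V=L$ relative to $\mathrm{ZF}$, e.g. in Kunen's book) takes $S=\mathrm{ZF}$, $T=\mathrm{ZF}+(V{=}L)$ and $P=L$, the class of constructible sets: since $\mathrm{ZF}\vdash\exists x\, L(x)$ and $\mathrm{ZF}\vdash\varphi^{L}$ for every axiom $\varphi$ of $\mathrm{ZF}+(V{=}L)$, the lemma yields
\begin{align*}
\operatorname{Con}(\mathrm{ZF})\ \Rightarrow\ \operatorname{Con}(\mathrm{ZF}+V{=}L).
\end{align*}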



As far as I understand, the proof goes like this: assume that $T$ is inconsistent and let $\psi$ be such that $T\vdash\psi\wedge\neg\psi$.



Then one proves that this implies $S\vdash\psi^P\wedge\neg\psi^P$. In fact, one proves that $S\vdash\psi^P$ for every $\psi$ with $T\vdash\psi$. (That $S\vdash\psi^P\wedge\neg\psi^P$ makes $S$ inconsistent is clear to me.)




Now in order to prove the statement
\begin{align*}
T\vdash\psi \Rightarrow\ S\vdash\psi^P\ \ \ (*)
\end{align*}
one actually shows
\begin{align*}
T\vdash\psi \Rightarrow\ T^P\cup\{\exists xP(x)\}\vdash\psi^P\ \ \ (**),
\end{align*}
where $T^P:=\{\varphi^P\mid\ \varphi\in T\}$. By assumption, $(**)$ implies $(*)$, so it is enough to prove $(**)$. I have trouble understanding the proof of $(**)$. Going through the literature (mainly Kunen's book, but also some lecture notes), the idea for $(**)$ looks like this: given a deduction, i.e., a formal proof

\begin{align*}
\varphi_1,\ \dots,\ \varphi_{n-1},\ \psi
\end{align*}
from $T$ for $\psi$, one shows that
\begin{align*}
\exists xP(x),\ \varphi_1^P,\ \dots,\ \varphi_{n-1}^P,\ \psi^P
\end{align*}
is a deduction from $T^P\cup\{\exists xP(x)\}$ for $\psi^P$.
Now, here's my question: where exactly does one need the premise $\exists xP(x)$, and how is it used to make sure that the deduction stays correct? It is clear to me that one needs it, and I even worked out a proof of $(**)$ using Gödel's completeness theorem, i.e., using $\models$ instead of $\vdash$. There I could clearly see where $\exists xP(x)$ is needed: one interprets the class $P$ inside a model, and this interpretation must be nonempty, hence $\exists x P(x)$. So, intuitively, it is clear to me that the proof working with $\vdash$ must have the above structure, but I guess I overlooked something and implicitly used $\exists xP(x)$ without noticing it. However, I want to understand the syntactical proof without using completeness. My search in the literature has been quite unsatisfying so far, because I always found only sketches of the proof.




By the way, I was working in Shoenfield's calculus/deductive system of predicate logic. If someone has literature hints or links to a more detailed proof, I would be very grateful. Thank you very much.



P.S.: Sorry for the long post!


Answer



You are asking where the assumption $\exists x P(x)$ is deployed in translating a proof of a sentence $\psi$ with assumptions drawn from a set of sentences $T$ into a proof of $\psi^P$ with assumptions drawn from $T^P \cup \{\exists x P(x)\}$. Here $\psi^P$, the relativisation of $\psi$ at $P$, means the formula obtained from $\psi$ by recursively replacing subformulas of the form $\forall x \chi$ by $\forall x (P(x) \Rightarrow \chi)$ and subformulas of the form $\exists x \chi$ by $\exists x(P(x)\land \chi)$.
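Purely as an illustration of this recursion (the tuple encoding of formulas and the name `relativize` are invented here, not taken from Kunen or Shoenfield), the relativisation map can be sketched in a few lines of code:

```python
# Toy formula representation: atoms are strings; compound formulas are
# tagged tuples such as ("forall", "x", body) or ("and", lhs, rhs).

def relativize(phi):
    """Return phi^P: restrict every quantifier to the predicate P."""
    if isinstance(phi, str):                # atomic formula: unchanged
        return phi
    tag = phi[0]
    if tag in ("and", "or", "implies"):     # commutes with binary connectives
        return (tag, relativize(phi[1]), relativize(phi[2]))
    if tag == "not":
        return ("not", relativize(phi[1]))
    if tag == "forall":                     # forall x chi ~> forall x (P(x) -> chi^P)
        x, chi = phi[1], phi[2]
        return ("forall", x, ("implies", ("P", x), relativize(chi)))
    if tag == "exists":                     # exists x chi ~> exists x (P(x) and chi^P)
        x, chi = phi[1], phi[2]
        return ("exists", x, ("and", ("P", x), relativize(chi)))
    raise ValueError(f"unknown tag: {tag}")
```

For instance, the encoding of $\forall x\,\exists y\,(x\in y)$ is mapped to the encoding of $\forall x\,(P(x)\Rightarrow\exists y\,(P(y)\land x\in y))$, matching the definition above.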



To answer the question, I need to describe the translation.
A natural deduction style presentation of first-order logic is convenient for this.
(I don't have Kunen's book to hand and I have not read Shoenfield's book, so you will have to adapt this to the formalisations that those books use.)
What follows works for any first-order language (such as the language of set theory) that has only predicate symbols and no constant or function symbols, so that terms are just variables.




We will define by induction a function that maps a proof $\Pi$
with conclusion $\alpha$ and assumptions $\alpha_1, \ldots, \alpha_n$
into a proof $\Pi^P$ with conclusion $\alpha^P$ and assumptions
$\alpha_1^P, \ldots, \alpha_n^P, P(x_1), \ldots, P(x_k)$ for some
variables $x_1, \ldots, x_k$ that occur somewhere in $\Pi$.



Base case: the proof is a single formula
$\alpha$ (an assumption), and we translate it into $\alpha^P$.




Propositional rules: relativisation commutes with the propositional connectives, so a propositional rule translates into an instance of the same rule.
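Explicitly, since relativisation leaves atomic formulas unchanged and only restricts quantifiers, we have
\begin{align*}
(\neg\alpha)^P=\neg\,\alpha^P,\qquad
(\alpha\land\beta)^P=\alpha^P\land\beta^P,\qquad
(\alpha\Rightarrow\beta)^P=\alpha^P\Rightarrow\beta^P.
\end{align*}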
E.g., for $\land$-introduction, the input is
$$
\begin{array}{ccc}
\vdots & & \vdots \\
\alpha & &\beta
\\\hline
&\alpha \land \beta&
\end{array}
$$

and we have already translated the proofs of $\alpha$ and $\beta$
into proofs of $\alpha^P$ and $\beta^P$. The $\land$-introduction then
translates into another $\land$-introduction:
$$
\begin{array}{ccc}
\vdots & & \vdots \\
\alpha^P & &\beta^P
\\\hline
&(\alpha \land \beta)^P&
\end{array}

$$



$\forall$-introduction:
the input is
$$
\begin{array}{c}
\vdots \\
\alpha
\\\hline
\forall x \alpha

\end{array}
$$
where $x$ does not appear free in any assumption.
We have already translated the proof of $\alpha$.
However, the translation may have added extra assumptions
$P(x)$. We do an $\Rightarrow$-introduction to discharge
all those assumptions, so that $x$ no longer appears
free in any assumption, and follow that by a $\forall$-introduction:
$$
\begin{array}{c}

\vdots \\
\alpha^P
\\\hline
P(x) \Rightarrow \alpha^P
\\\hline
(\forall x \alpha)^P
\end{array}
$$



$\forall$-elimination: the input is

$$
\begin{array}{c}
\vdots \\
\forall x \alpha
\\\hline
\alpha[y/x]
\end{array}
$$
and we have already translated the proof of $\forall x \alpha$.
A $\forall$-elimination gives us an implication, which

we eliminate by adding an assumption $P(y)$:
$$
\begin{array}{ccc}
P(y) & &
\begin{array}{c}
\vdots \\
(\forall x \alpha)^P
\\\hline
P(y) \Rightarrow (\alpha[y/x])^P
\end{array}

\\\hline
& (\alpha[y/x])^P
\end{array}
$$



$\exists$-introduction: the input is
$$
\begin{array}{c}
\vdots \\
\alpha

\\\hline
\exists x \alpha
\end{array}
$$
and we have already translated the proof of $\alpha$. To prove
$(\exists x\alpha)^P$ we have to add the assumption $P(x)$.
$$
\begin{array}{ccc}
&& \vdots \\
P(x) && \alpha^P

\\\hline
&
\begin{array}{c}
P(x) \land \alpha^P
\\\hline
(\exists x \alpha)^P
\end{array}
\end{array}
$$




$\exists$-elimination: the input is
$$
\begin{array}{ccc}
& & \alpha[y/x] \\
\vdots && \vdots \\
\exists x\alpha && \beta
\\\hline
& \beta &
\end{array}
$$

where $y$ does not appear free in $\beta$ or any assumption
other than the assumptions $\alpha[y/x]$, which are discharged
by this rule.
Renaming $y$ if necessary in the right-hand subproof, we may
assume that $y$ does not appear anywhere in the left-hand subproof.
We have already translated the subproofs. The translation of
the right-hand subproof may have additional assumptions
in which $y$ appears free, but these will all have the form $P(y)$.
We put $\land$-eliminations over the assumptions $(\alpha[y/x])^P$ and $P(y)$ in the translation of the right-hand subproof and then do an $\exists$-elimination:
$$

\begin{array}{ccc}
\begin{array}{c}
\vdots
\\(\exists x\alpha)^P
\end{array}
& &
\begin{array}{ccc}
\begin{array}{c}
P(y) \land (\alpha[y/x])^P
\\\hline

(\alpha[y/x])^P
\\\vdots
\end{array}
& &
\begin{array}{c}
P(y) \land (\alpha[y/x])^P
\\\hline
P(y)
\\\vdots
\end{array} \\\hline

& \beta^P &
\end{array}
\\\hline
& \beta^P
\end{array}
$$



That completes the definition of the translation. If we apply it to
a proof $\Pi$ of a sentence $\psi$ with assumptions drawn from the set
of sentences $T$, the result is a proof $\Pi^P$ of $\psi^P$ with assumptions

drawn from $T^P$ and possibly additional assumptions of the form $P(x)$.
The variables $x$ in these additional assumptions are the only free
variables in the assumptions of $\Pi^P$. We can therefore use
$\exists$-elimination to eliminate the additional assumptions in favour
of the assumption $\exists x P(x)$, resulting in a proof of $\psi^P$
with assumptions drawn from $T^P \cup \{\exists x P(x)\}$, which was what we wanted. It is this final step where the assumption $\exists x P(x)$ is deployed.

