Thursday, 28 January 2016

logic - Relative Consistency Lemma with finitistic proof



In set theory, one uses the following lemma in order to provide relative consistency proofs. I have a question concerning the proof of this lemma. First, here is the statement:



Suppose that $S$ and $T$ are theories over the language $L(\in)$ of set theory and let $P$ be a class (or, if you want, a new unary predicate symbol added to $L(\in)$). If $S \vdash \exists x\, P(x)$ and, for all $\varphi \in T$, $S \vdash \varphi^P$, then the consistency of $S$ implies the consistency of $T$.



As far as I understand, the proof goes like this: assume that $T$ is inconsistent and let $\psi$ be such that $T \vdash \psi \wedge \neg\psi$.



Then one proves that this implies $S \vdash \psi^P \wedge \neg\psi^P$. In fact, one proves that for every $\psi$ such that $T \vdash \psi$, there holds $S \vdash \psi^P$ (that $S \vdash \psi^P \wedge \neg\psi^P$ proves $S$ inconsistent is clear to me).




Now, in order to prove the statement
$$T \vdash \psi \;\Longrightarrow\; S \vdash \psi^P \qquad (\ast)$$
one actually shows
$$T \vdash \psi \;\Longrightarrow\; T^P \cup \{\exists x\, P(x)\} \vdash \psi^P \qquad (\ast\ast),$$
where $T^P := \{\varphi^P \mid \varphi \in T\}$. By assumption, $(\ast\ast)$ implies $(\ast)$, so it is enough to prove $(\ast\ast)$. I have trouble understanding the proof of $(\ast\ast)$. Going through the literature (mainly Kunen's book, but also some lecture notes), the idea for $(\ast\ast)$ looks like this: given a deduction, i.e., a formal proof

$$\varphi_1,\; \ldots,\; \varphi_{n-1},\; \psi$$
from $T$ for $\psi$, one shows that
$$\exists x\, P(x),\; \varphi_1^P,\; \ldots,\; \varphi_{n-1}^P,\; \psi^P$$
is a deduction from $T^P \cup \{\exists x\, P(x)\}$ for $\psi^P$.
Now, here's my question: where exactly do you need the premise $\exists x\, P(x)$, and how is it used to make sure that the deduction stays correct? It is clear to me that one needs it, and I even worked out a proof of $(\ast\ast)$ using Gödel's completeness theorem, i.e. using $\models$ instead of $\vdash$. There I could clearly see where $\exists x\, P(x)$ is needed: one relativizes the class $P$ to a set model, and models need to be nonempty, hence $\exists x\, P(x)$. So, intuitively, it is clear to me that the proof working with $\vdash$ must have the above structure, but I guess I overlooked something and implicitly used $\exists x\, P(x)$ where I didn't see it. However, I want to understand the syntactical proof without using completeness. My search in the literature has been quite unsatisfying so far, because I always just found sketches of the proof.




By the way, I was working in Shoenfield's calculus/deductive system of predicate logic. If someone has literature hints or links to a more detailed proof, I would be very grateful. Thank you very much.



P.S.: Sorry for the long post!


Answer



You are asking where the assumption $\exists x\, P(x)$ is deployed in translating a proof of a sentence $\psi$ with assumptions drawn from a set of sentences $T$ into a proof of $\psi^P$ with assumptions drawn from $T^P \cup \{\exists x\, P(x)\}$. Here $\psi^P$, the relativisation of $\psi$ at $P$, means the formula obtained from $\psi$ by replacing subformulas of the form $\exists x\, \chi$ by $\exists x\, (P(x) \wedge \chi)$ and subformulas of the form $\forall x\, \chi$ by $\forall x\, (P(x) \rightarrow \chi)$.
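Since the relativisation $\psi \mapsto \psi^P$ is purely syntactic, it can be written down as a recursion on formulas. Here is a minimal illustrative sketch in Python (the AST classes and all names are my own invention, not from any library or from the post itself):

```python
from dataclasses import dataclass

# A tiny formula AST: atomic predicates, ¬, ∧, →, ∀, ∃.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Pred:          # atomic formula, e.g. x ∈ y or P(x)
    name: str
    args: tuple

@dataclass(frozen=True)
class Not:
    sub: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Implies:
    left: object
    right: object

@dataclass(frozen=True)
class Forall:
    var: Var
    body: object

@dataclass(frozen=True)
class Exists:
    var: Var
    body: object

def relativize(phi):
    """Return phi^P: guard every ∀x by P(x) → … and every ∃x by P(x) ∧ …."""
    if isinstance(phi, Pred):
        return phi                       # atomic formulas are unchanged
    if isinstance(phi, Not):
        return Not(relativize(phi.sub))
    if isinstance(phi, (And, Implies)):  # relativisation commutes with connectives
        return type(phi)(relativize(phi.left), relativize(phi.right))
    if isinstance(phi, Forall):
        return Forall(phi.var, Implies(Pred("P", (phi.var,)), relativize(phi.body)))
    if isinstance(phi, Exists):
        return Exists(phi.var, And(Pred("P", (phi.var,)), relativize(phi.body)))
    raise TypeError(f"unknown formula: {phi!r}")

# (∀x ∃y  x ∈ y)^P  =  ∀x (P(x) → ∃y (P(y) ∧ x ∈ y))
x, y = Var("x"), Var("y")
phi = Forall(x, Exists(y, Pred("in", (x, y))))
assert relativize(phi) == Forall(
    x, Implies(Pred("P", (x,)), Exists(y, And(Pred("P", (y,)), Pred("in", (x, y)))))
)
```

The point of the sketch is only that $\psi^P$ is computed bottom-up from $\psi$, exactly as the proof translation below proceeds bottom-up on derivations.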



To answer the question, I need to describe the translation.
A natural deduction style presentation of first-order logic is convenient for this.
(I don't have Kunen's book to hand and I have not read Shoenfield's book, so you will have to adapt this to the formalisations that those books use.)
What follows works for any first-order language (such as the language of set theory) that has only predicate symbols and no constant or function symbols, so that terms are just variables.




We will define by induction a function that maps a proof $\Pi$
with conclusion $\alpha$ and assumptions $\alpha_1, \ldots, \alpha_n$
into a proof $\Pi^P$ with conclusion $\alpha^P$ and assumptions
$\alpha_1^P, \ldots, \alpha_n^P, P(x_1), \ldots, P(x_k)$ for some
variables $x_1, \ldots, x_k$ that occur somewhere in $\Pi$.



Base case: we have a single formula $\alpha$, and we translate it into $\alpha^P$.




Propositional rules: relativisation commutes with the propositional connectives, so a propositional rule translates into an instance of the same rule.
E.g., for $\wedge$-introduction, the input is
$$\frac{\alpha \qquad \beta}{\alpha \wedge \beta}$$
and we have already translated the proofs of $\alpha$ and $\beta$
into proofs of $\alpha^P$ and $\beta^P$. The $\wedge$-introduction then
translates into another $\wedge$-introduction:
$$\frac{\alpha^P \qquad \beta^P}{(\alpha \wedge \beta)^P}$$



$\forall$-introduction: the input is
$$\frac{\alpha}{\forall x\, \alpha}$$
where $x$ does not appear free in any assumption.
We have already translated the proof of $\alpha$.
However, the translation may have added extra assumptions
$P(x)$. We do an $\rightarrow$-introduction to discharge
all those assumptions, so that $x$ no longer appears
free in any assumption, and follow that by a $\forall$-introduction:
$$\frac{\dfrac{\alpha^P}{P(x) \rightarrow \alpha^P}}{(\forall x\, \alpha)^P}$$



$\forall$-elimination: the input is
$$\frac{\forall x\, \alpha}{\alpha[y/x]}$$
and we have already translated the proof of $\forall x\, \alpha$.
A $\forall$-elimination gives us an implication, which
we eliminate by adding an assumption $P(y)$:
$$\frac{\dfrac{(\forall x\, \alpha)^P}{P(y) \rightarrow (\alpha[y/x])^P} \qquad P(y)}{(\alpha[y/x])^P}$$



$\exists$-introduction: the input is
$$\frac{\alpha}{\exists x\, \alpha}$$
and we have already translated the proof of $\alpha$. To prove
$(\exists x\, \alpha)^P$ we have to add the assumption $P(x)$:
$$\frac{\dfrac{P(x) \qquad \alpha^P}{P(x) \wedge \alpha^P}}{(\exists x\, \alpha)^P}$$




$\exists$-elimination: the input is
$$\frac{\exists x\, \alpha \qquad \begin{matrix}[\alpha[y/x]]\\ \vdots\\ \beta\end{matrix}}{\beta}$$
where $y$ does not appear free in $\beta$ or in any assumption
other than the assumptions $\alpha[y/x]$, which are discharged
by this rule.
Renaming $y$ if necessary in the right-hand subproof, we may
assume that $y$ does not appear anywhere in the left-hand subproof.
We have already translated the subproofs. The translation of
the right-hand subproof may have additional assumptions
in which $y$ appears free, but these will all have the form $P(y)$.
We put $\wedge$-eliminations over the assumptions $(\alpha[y/x])^P$ and $P(y)$ in the translation of the right-hand subproof and then do an $\exists$-elimination:
$$\frac{(\exists x\, \alpha)^P \qquad \begin{matrix}\dfrac{P(y) \wedge (\alpha[y/x])^P}{(\alpha[y/x])^P} \qquad \dfrac{P(y) \wedge (\alpha[y/x])^P}{P(y)}\\ \vdots\\ \beta^P\end{matrix}}{\beta^P}$$



That completes the definition of the translation. If we apply it to
a proof $\Pi$ of a sentence $\psi$ with assumptions drawn from the set
of sentences $T$, the result is a proof $\Pi^P$ of $\psi^P$ with assumptions
drawn from $T^P$ and possibly additional assumptions of the form $P(x)$.
The variables $x$ in these additional assumptions are the only free
variables in the assumptions of $\Pi^P$. We can therefore use
$\exists$-elimination to eliminate the additional assumptions in favour
of the assumption $\exists x\, P(x)$, resulting in a proof of $\psi^P$
with assumptions drawn from $T^P \cup \{\exists x\, P(x)\}$, which is what we wanted. It is this final step where the assumption $\exists x\, P(x)$ is deployed.
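To see that final step in miniature, here is an illustrative example of my own (not from the argument above): in a nonempty-domain logic the sentence $\psi = \exists x\,(x = x)$ is provable with no assumptions at all, yet its relativisation $\psi^P = \exists x\,(P(x) \wedge x = x)$ genuinely needs $\exists x\, P(x)$.

```latex
% The translated proof ends with exactly the final ∃-elimination:
% from the assumption P(x) and the theorem x = x, build ψ^P, then
% discharge P(x) against the premise ∃x P(x).
\[
\frac{\exists x\, P(x)
      \qquad
      \dfrac{\dfrac{P(x) \qquad x = x}{P(x) \wedge x = x}}
            {\exists x\,(P(x) \wedge x = x)}}
     {\exists x\,(P(x) \wedge x = x)}
\]
% Without the premise ∃x P(x) there is no rule that could ever
% discharge the leftover assumption P(x).
```

Since $x$ is not free in $\psi^P$, the $\exists$-elimination is legitimate, and this is the only place the premise is used.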

