What obstructions do you run into if you're trying to develop first order logic when you only have finitely many constant, relation, and function symbols?
Are there any cases where you actually need infinitely many symbols? It seems like you'd be able to do almost everything with a finite but "large enough" number of symbols. Though I suppose there might be some axiom schemas that you can't express.
What I'm looking for is a theorem of classical logic that you can't prove with finitely many symbols, even in some limited case like "if you have at least $f(n)$ symbols, then you can prove this for $n$ (formulas, $n$-length proofs, etc.)," or infinite axiom schemas that require infinitely many symbols, or other problems that I haven't thought of.
Answer
The overwhelming majority of mathematical theories are framed in a language that has only finitely many symbols, with the exception that infinitely many variable symbols are allowed. Arguably the most important theory, ZF, has a single binary predicate symbol.
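For instance, the Axiom of Extensionality of ZF is expressible using only the membership predicate $\in$, equality, and the logical vocabulary:
$$\forall x\,\forall y\,\bigl(\forall z\,(z\in x \leftrightarrow z\in y)\rightarrow x=y\bigr).$$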
We can sneak around the infinitude of variable symbols by starting with the variable symbol $x$, and constructing others by appending an arbitrary number of occurrences of a new symbol $|$.
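Concretely, the official variables are then the strings
$$x,\quad x|,\quad x||,\quad x|||,\quad \dots$$
which play the roles of the usual $x_0, x_1, x_2, x_3, \dots$; the alphabet of the language stays finite while still supplying infinitely many variables.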
For some theoretical purposes, such as proving the Upward Löwenheim-Skolem Theorem, or for Skolemization, it is useful to consider languages with infinitely many constant symbols, and even function symbols. And for constructing "rich" non-standard models of analysis, it is useful to have a constant symbol for each real number, a function symbol for each function on $\mathbb{R}^n$, and a predicate symbol for each relation. So certainly I would not want to restrict attention to "small" languages, but for most mathematical purposes one can.
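To give one standard illustration (a sketch of the usual compactness argument, not the only route): to get a model of a theory $T$ with at least $\kappa$ elements, one adjoins $\kappa$ new constant symbols $\{c_\alpha : \alpha < \kappa\}$ together with the axioms
$$c_\alpha \neq c_\beta \qquad (\alpha < \beta < \kappa),$$
and then applies compactness. Every finite subset of the expanded theory mentions only finitely many of the $c_\alpha$, so it is satisfiable in any infinite model of $T$ by interpreting those constants as distinct elements; hence the whole expanded theory has a model, which must contain at least $\kappa$ elements. The infinitude of the added constants is doing the work here.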