'Completeness' is a tortured word: it can mean many things in logic, and it is probably one of the most overloaded and misused terms in the field.
There are at least four different (or at least subtly different) uses of 'completeness':
The usual undergraduate presentation of logic uses what we call truth-theoretic or Tarski semantics: things mean what we think they do, in that ∧ means 'and,' ∨ means 'or,' and so on.
This leads to a fairly strict definition of truth:
In the first case, no external information is necessary (other than an assignment of true or false to each propositional variable). In the second case, a universe of elements is needed: all the basic propositions of first-order logic (which are usually equalities of terms or relations) will be decided based on the interpretation of these primitives in that structure.
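For the propositional case, this truth definition can be written out directly. The following is a minimal sketch; the tuple encoding of formulas is an assumption of this example, not something fixed by the text:

```python
# A propositional formula is a nested tuple: ('var', p), ('not', f),
# ('and', f, g), or ('or', f, g). Truth is computed recursively from
# an assignment of True/False to each propositional variable.

def truth(formula, assignment):
    """Tarski-style truth: the meaning of each connective is the expected one."""
    op = formula[0]
    if op == 'var':
        return assignment[formula[1]]
    if op == 'not':
        return not truth(formula[1], assignment)
    if op == 'and':
        return truth(formula[1], assignment) and truth(formula[2], assignment)
    if op == 'or':
        return truth(formula[1], assignment) or truth(formula[2], assignment)
    raise ValueError(f"unknown connective: {op}")

# p ∨ ¬p is true under every assignment, i.e. it is a tautology:
excluded_middle = ('or', ('var', 'p'), ('not', ('var', 'p')))
print(all(truth(excluded_middle, {'p': v}) for v in (True, False)))  # True
```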
We then have two relations: syntactic consequence (provability, ⊢) and semantic entailment (⊨).
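In the propositional case, semantic entailment can be decided by brute force, simply by checking every assignment (no such enumeration exists for first-order logic). A minimal sketch, with the formula encoding again an assumption of this example:

```python
# Brute-force propositional semantic entailment: premises ⊨ conclusion.
from itertools import product

def variables(f):
    """Collect the propositional variables occurring in a formula."""
    return {f[1]} if f[0] == 'var' else set().union(*(variables(s) for s in f[1:]))

def truth(f, a):
    op = f[0]
    if op == 'var': return a[f[1]]
    if op == 'not': return not truth(f[1], a)
    if op == 'and': return truth(f[1], a) and truth(f[2], a)
    if op == 'or':  return truth(f[1], a) or truth(f[2], a)

def entails(premises, conclusion):
    """Every assignment satisfying all premises must satisfy the conclusion."""
    vs = sorted(set().union(variables(conclusion), *map(variables, premises)))
    for row in product([True, False], repeat=len(vs)):
        a = dict(zip(vs, row))
        if all(truth(p, a) for p in premises) and not truth(conclusion, a):
            return False
    return True

p, q = ('var', 'p'), ('var', 'q')
implies_pq = ('or', ('not', p), q)   # encode p -> q as ¬p ∨ q
print(entails([p, implies_pq], q))   # True: modus ponens is semantically valid
print(entails([implies_pq], q))      # False: p -> q alone does not entail q
```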
That first-order logic is complete is the content of Gödel's completeness theorem, which is harder to prove.
Please expand.
The algebraic approach to logic takes us a step further from truth (but not exactly into proofs yet).
An algebraic semantics assigns to every formula of a logical system some sort of mathematical object. The collection of these objects is called a model, and usually organises into some structure, for example a Boolean or Heyting algebra.
One then has the usual notions of soundness and completeness:
In classical propositional logic:
In intuitionistic propositional logic:
In modal logic: . . .
In classical first-order logic (./First-order_Logic):
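For the classical propositional case, these definitions can be made concrete in a small model. The sketch below interprets formulas in the Boolean algebra of subsets of a two-element set; the encoding and all names are assumptions of this example:

```python
# Interpreting classical propositional formulas in a small Boolean algebra:
# the powerset of {0, 1}, with ∧, ∨, ¬ read as intersection, union, complement.
UNIVERSE = frozenset({0, 1})

def denote(f, valuation):
    """Map a formula to an element of the algebra, i.e. a subset of UNIVERSE."""
    op = f[0]
    if op == 'var': return valuation[f[1]]
    if op == 'not': return UNIVERSE - denote(f[1], valuation)
    if op == 'and': return denote(f[1], valuation) & denote(f[2], valuation)
    if op == 'or':  return denote(f[1], valuation) | denote(f[2], valuation)

p = ('var', 'p')
lem = ('or', p, ('not', p))                        # p ∨ ¬p
subsets = [frozenset(s) for s in ([], [0], [1], [0, 1])]
# Soundness/completeness in the algebraic sense: a classically provable
# formula denotes the top element under every valuation.
print(all(denote(lem, {'p': s}) == UNIVERSE for s in subsets))  # True
```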
Notice the appearance of the word every in the definition of completeness above. This stands in sharp contrast with, say, the Tarski semantics of propositional logic, which is essentially just one model.
In fact, this universal quantification makes completeness proofs trivial. For example, no matter the presentation (Hilbert system, natural deduction, ...), classical propositional logic is complete for Boolean algebras. The proof of this fact is deceptively simple, and is known by the posh name of the Lindenbaum-Tarski construction. Here are the main steps:
1. Quotient the set of formulas by provable equivalence, identifying φ and ψ whenever φ ↔ ψ is provable.
2. Check that the connectives are well-defined on equivalence classes; the quotient then forms a Boolean algebra, the Lindenbaum-Tarski algebra.
3. Observe that the quotient map is itself a valuation, under which a formula denotes the top element exactly when it is provable; hence any formula valid in every Boolean algebra is provable.
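A finite toy version of this quotient can be computed mechanically. For classical propositional logic, provable equivalence coincides with truth-table equivalence, so quotienting formulas in one variable by their truth tables is a fair proxy; the encoding below is illustrative:

```python
# Toy Lindenbaum-Tarski quotient: formulas in one variable p, identified
# when they have the same truth table.
from itertools import product

def truth(f, a):
    op = f[0]
    if op == 'var': return a[f[1]]
    if op == 'not': return not truth(f[1], a)
    if op == 'and': return truth(f[1], a) and truth(f[2], a)
    if op == 'or':  return truth(f[1], a) or truth(f[2], a)

def table(f):
    """The truth table of f is a canonical representative of its class."""
    return tuple(truth(f, {'p': v}) for v in (True, False))

# Close a seed set of formulas under the connectives a couple of times.
formulas = [('var', 'p')]
for _ in range(2):
    new = [('not', f) for f in formulas]
    new += [('and', f, g) for f in formulas for g in formulas]
    new += [('or', f, g) for f in formulas for g in formulas]
    formulas += new

# The equivalence classes form the four-element Boolean algebra
# {bottom, p, not-p, top}, however many formulas we generate.
print(len({table(f) for f in formulas}))  # 4
```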
Proof-theoretically, there is a distinction between the aforementioned shallow notion of 'for every model' completeness, and full completeness.
The first one holds whenever an equality of proofs that holds in all models is also provable in the equational theory of proofs; it usually holds 'for free.'
Full completeness, on the other hand, is a term used to mean that every element of the model is the denotation of a certain proof.
A common source of confusion: incompleteness is not the negation of logical completeness! Gödel's incompleteness theorems say that certain theories fail to prove or refute some sentence expressible in their language; logical completeness, in the senses above, is about provability matching some semantic notion of validity.
There is a subtle point here relating to the object language/metalanguage dichotomy. When proving a completeness theorem, how strong does our metalanguage need to be?
For example, the usual proof of completeness of the usual truth-theoretic semantics of classical first-order logic uses classical reasoning in the metalanguage in a very strong way. Can we avoid this? That is, can we prove the completeness of the usual FOL semantics in an intuitionistic language? This might be of serious interest in the quest for the formalisation of mathematics, which often uses constructive languages, such as type theory.
Interestingly, it seems that the answer to this question is negative: we cannot constructively prove the completeness of classical FOL unless we change the notion of model of FOL. See the following preprint: