
Tarski's Definition of Logical Consequence

During the years 1929–1933 Tarski put together a definition of the concept 'φ is a true sentence of the language L' [16], which has become known as 'Tarski's definition of truth'. Tarski stated some very strict conditions that his definition had to meet. All symbols of the language L (apart from punctuation—we ignore this below) must be fully meaningful. The definition is written in the formalised metalanguage of L, but justified in the informal meta-metalanguage. It must use only higher-order logic, concepts expressible in the language L itself, and some syntactic notions. It must be extensionally correct: the objects satisfying it must be exactly the objects that we count intuitively as true sentences of L. The extensional correctness must be informally provable in the meta-metatheory of L. This is not the place to go into further details. The paper became well known through a German translation in 1935. It makes no reference to models, and model theorists don't cite it.

In 1935 Tarski was persuaded to attend the International Congress of Philosophers in Paris. Worrying about what he could say to impress the philosophers, he formed the idea of presenting the truth definition as a vehicle for giving formal definitions of various notions from logical metatheory, among them the notion of logical consequence. The result was a pair of papers, [17] presenting the definition of logical consequence, and [18] discussing the general idea of defining semantic notions [5, pp. 95ff].

The paper [17] on logical consequence answered a methodological question, not a question of conceptual analysis. You can't do conceptual analysis until you have a concept to analyse. But when Tarski wrote, there was no agreed concept of logical consequence to be analysed. (One should look first at what was available in the literature of his time. For example Hilbert and Ackermann [8, p. 1] have a proof-theoretic notion of Logische Folgerung, while Carnap [3, p. 10] speaks of one proposition being a Grund for another, without any clear definition. Tarski may also have factored in earlier ideas, like Bolzano's Ableitbarkeit and various medieval notions of consequentia.) Tarski makes exactly this point in his opening paragraph, noting that 'every precise definition of this concept will show arbitrary features to a greater or [lesser] degree' [19, p. 409]. In fact the term 'logical consequence' itself seems to have become common in philosophical logic only as a result of Tarski's paper.

To see what the methodological question was, we need to put the paper in context. Gödel had recently shown that there is no maximal proof calculus for pure logic of second or higher order. Ramsey [13] had discussed languages with infinite conjunctions, and both Bernays [1, pp. 86ff.] and Tarski himself [19, p. 288] had considered proof rules with infinitely many premises. So some very general questions about proof calculi were in the air, and some robust and well-motivated definitions were needed for handling them. Tarski seems to have clarified the central question in his own mind along the following lines:

What are the weakest constraints that we can put on a rule for deriving propositions from sets of propositions in a formal language, which make it reasonable to count any rule satisfying these constraints as an inference rule?

He proposed to label these constraints as saying that the conclusion of the rule is a 'logical consequence' of its premises.

Now for Tarski in 1935 there were two kinds of formal language. In the first kind, which we can call 'pure' languages, all symbols are logical. In the second kind, which we can call 'applied', there are also nonlogical symbols, but these symbols are all required to be fully meaningful. For pure languages, Tarski adopted just the constraint that whenever the premises are true the conclusion must be true too. This constraint looks trivial, but in Paris in 1935 it served the purpose of advertising his recent formal definition of 'true'.

For applied languages Tarski had to decide what to do about analytical relations between the meanings of the nonlogical constants. For example Hilbert in his Göttingen lectures around 1920 (which formed the basis of his book with Ackermann) had observed that 'Tony Blair is a parent' entails 'Tony Blair has a child' (my adaptation of Hilbert's more traditional example). Would it be appropriate to allow an inference rule that takes one from the first sentence to the second? Tarski decided no. An inference rule should be invariant under systematic changes of the meanings of the nonlogical symbols; but if we swap the meanings of 'has a child' and 'has bright red hair', then the proposed inference rule would take a true premise to a false conclusion.
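
To fix ideas, here is a schematic rendering of the resulting condition in modern notation; the symbols are mine, not Tarski's. Write K for the set of premises, X for the conclusion, and φ′(x̄) for the sentential function obtained from a sentence φ by uniformly replacing its nonlogical constants with variables x̄. Then X is declared a logical consequence of K roughly when

\[
\text{for every assignment of objects } \bar{a} \text{ to } \bar{x}:\quad
\big(\,\bar{a} \text{ satisfies } \varphi'(\bar{x}) \text{ for every } \varphi \in K\,\big)
\;\Longrightarrow\;
\bar{a} \text{ satisfies } X'(\bar{x}).
\]

In Tarski's own terminology: every model of the class K is also a model of the sentence X.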

It's noticeable that Tarski's own text says almost nothing about relations between the meanings of the nonlogical constants (there is a brief parenthetical remark in the middle of p. 415 in [19]), but has at least a page on the importance of the difference between (i) changing a symbol to one with a different meaning and (ii) replacing the symbol by a variable to which arbitrary objects can be assigned. That tells me that Tarski in 1935 was really more interested in fine-tuning the notion of satisfaction than in accommodating the philosophers in Paris.

The paper does use the word 'model', though not in the modern sense. The name 'model-theoretic definition of logical consequence' is not Tarski's, and I think it came into use only after the later developments that we turn to next.

 