Advances in Proof-Theoretic Semantics
Defining Logical Consequence
In both his truth definition and his definition of logical consequence, Tarski set new standards of carefulness about the requirements he was imposing on the definitions: what concepts could be used in the definitions, and what assumptions could be used in the justifications of the definitions. You can attack his definitions either by showing that they failed to meet the requirements, or by arguing that the requirements were inappropriate for his purposes. Or of course you can propose some different requirements that suit a different agenda. This third option wouldn't be an attack on Tarski; it would be an alternative venture.
Here is an example of an alternative venture. Suppose you want the definition of logical consequence to have the following property:
For any propositions φ and ψ, if the definition of 'ψ is a logical consequence of φ' is that r(φ, ψ), then the statement r(φ, ψ) states criteria that can be used for convincing ourselves that ψ is (or is not) a logical consequence of φ.
To make this realistic, maybe we should add 'at least in simple or straightforward cases'. Also, if you were a cognitive scientist, you might want to strengthen this to 'the criteria that we would in fact use for convincing ourselves . . .'; then the definition would express a theory about how we think.
It's not hard to show that Tarski's definition doesn't have this property. For Tarski the statement r(φ, ψ) takes the form
For every interpretation or model M, if M makes φ true then M makes ψ true.

Because of the quantifier over all M, in practice the only way of showing that r(φ, ψ) holds will normally be to show the stronger statement
For every interpretation or model M, 'M makes ψ true' is a logical consequence of 'M makes φ true'.
But this is just a more complicated variant of 'ψ is a logical consequence of φ', so it can't provide the criteria we asked for.
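The circularity can be set out schematically; the symbol ⊨ for 'makes true' and the abbreviated notation below are mine, not Tarski's:

```latex
% Tarski's definiens r(phi, psi) for 'psi is a logical consequence of phi':
r(\varphi,\psi) \;:\equiv\; \text{for every model } M,\ \bigl(M \vDash \varphi \;\rightarrow\; M \vDash \psi\bigr).
% Because of the quantifier over all M, in practice one establishes this
% by showing the stronger claim
\text{`}M \vDash \psi\text{' is a logical consequence of `}M \vDash \varphi\text{'},
% which is itself an instance of the very notion being defined -- hence
% the definition cannot supply the criteria asked for.
```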
Prawitz presents this argument very clearly [11, p. 67f.]. But the basic point is older. It goes back at least to Ibn Sīnā, who used it to argue that you can't use the notion 'true in situation S' as a device for making the validity of an inference intuitively clear. (This appears in his Qiyās iii.2, unfortunately still available only in Arabic.) Several people including me have suggested that the argument poses at least a theoretical difficulty for those mental model theorists who maintain that we do in fact reason by making the kind of move that Ibn Sīnā criticised. So I don't think that proof-theoretic semanticists who present the argument should assume they are in any way swimming against the tide.
Looking around the literature in proof-theoretic semantics, I don't in fact see anything that I would regard as a criticism of Tarski's definition. Things that are
phrased as attacks on the definition are usually pleas for a different agenda. Nothing compels us to stick to the agendas of eighty years ago.
A striking pair of papers by Peter Schroeder-Heister and Kosta Došen raise a number of questions about the nature of definitions, and about what can be defined in terms of what. I very much welcome the questions—the general theory of definition has had a very patchy treatment by logicians in the last century—and I agree with most of the positive points that Peter and Kosta make. But some of their claims about the views of other people seem to me mighty strange.
At the heart of their arguments against 'model-theoretic semantics' is the question of what can be defined in terms of what. This was a question of constant interest to the traditional Aristotelian logicians, and a large part of what they said about it strikes me as codswallop. Ouch—on general principle one shouldn't say that sort of thing about the logic of a distant culture. But what else can you say about people who insist that the only correct definition of 'human' is 'mortal rational animal', and give only circular arguments in support of this view?
There are still people who operate a broadly Aristotelian notion of the hierarchy of concepts. One notable example is the linguist Anna Wierzbicka [21, cf. p. 10]. She seems to operate by a kind of introspection of concepts. The main difficulty of introspection is that you can never be sure what is the source of the information that it serves up. I think in fact there are two main kinds of reason for regarding concept C as prior to concept D in the hierarchy of definitions. Both these reasons can in principle be lifted out of introspection and made objective, which is always an improvement.
The first kind of reason is that because of the way our minds work, we wouldn't be able to understand D unless we already understood C. For example, could you understand what it is to be vengeful if you didn't already understand what it is to be angry? Could you understand what it is to be infectious if you didn't understand what it is to be ill? Or closer to home, could you come to have a concept of satisfaction if you didn't already have a concept of truth? In theory at least, questions of these kinds can be answered by seeing what you can teach to children, or whether there are natural languages in which there is a word for D but no word for C. There are surely important cognitive facts to be discovered here, but I for one would rather leave it to the experts.
The second kind of reason is not cognitive but semantic. An example is that you can define 'x is a mother' in terms of 'x is the mother of y' by quantifying out the y, but there is no logical operation that goes in the opposite direction. To handle examples like this, it's almost essential to put in the variables, because the whole point is that 'mother of' has an extra argument that is missing in 'mother'—it has an extra degree of freedom. In fact Tarski and his teacher Leśniewski seem to have been the first logicians who insisted on putting variables where they are needed, though Frege had already raised the point.
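The point can be made precise in first-order notation (the predicate names Mother and MotherOf are mine, introduced for illustration):

```latex
% Quantifying out the second argument place defines the unary from the binary:
\mathrm{Mother}(x) \;:\equiv\; \exists y\, \mathrm{MotherOf}(x, y).
% There is no logical operation in the opposite direction: the unary
% Mother(x) has lost the argument place y, and no first-order construction
% on Mother alone can restore that extra degree of freedom.
```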
Kosta's paper does draw attention to one place where variables are needed. He points out (in his §4) that a notation for derivations which only allows us to put a variable for the conclusion is much less useful than a notation that allows us to put a variable for a hypothesis as well. This is clearly correct, and I can say so with an easy
conscience because I have already (in (7) above) used a notation that does precisely have variables for the hypotheses. My notation is very standard, but in fact it's not the one that Kosta himself recommends. In effect Kosta, working in a categorial framework, calls for a notation that sets out the variables in the concept
f is a derivation of B from A. (9)
My notation doesn't show the f , but if needed one could write an f in the middle of the triangle. Also Kosta's notation can be written in a line; this is an advantage in text, but possibly a hindrance for writing out pictures of complex derivations. On the other hand my notation has the advantage that it allows one to write several hypotheses, whereas Kosta's arrow notation allows just one source for the arrow; for my application in (7) above, that would have been a fatal flaw. As all this illustrates, there are some quite subtle relationships between notation and concept, and they are very sensitive to the purpose that the notation will be put to, and the mathematical context in which it will be used.
But elsewhere Kosta forgets the variables. For example he asks [4, §5]:
Can inferences be reduced to consequence relations? So that having an inference from A to B means just that B is a consequence of A. (10)
Where should the variables go? I suggest that the concept of an inference needs three variables, essentially as in Kosta's notation (9) for derivations:
x is an inference from y to z. (11)
The notion of consequence carries just two variables:
x is a consequence of y. (12)
Kosta's question (10) asks whether (11) is definable from (12), and he expects the answer No.
Clearly Kosta is right: (11) is not definable from (12) (and a fortiori not 'reducible to' (12)) for the glaring semantic reason that (11) carries an extra argument. This is not just an accident of Kosta's formulation. It's an essential part of the notion of z being inferable from y that people can perform an act called making an inference from y to z, but it is certainly not part of the notion of consequence that people can make a consequence. And I agree with Kosta that this is a point worth making. I also agree with him that for purposes of the foundations of logic, a psychological analysis of 'making an inference' is not the right way to go.
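Displaying the argument places side by side makes the semantic obstacle visible (the predicate names Inf and Cons are assumptions of mine, not Kosta's notation):

```latex
% (11) carries three argument places, one of them for the inference itself:
\mathrm{Inf}(x, y, z) \qquad x \text{ is an inference from } y \text{ to } z.
% (12) carries only two:
\mathrm{Cons}(x, y) \qquad x \text{ is a consequence of } y.
% Any definition of Inf purely in terms of Cons must either ignore the
% argument x -- the act of making the inference -- or smuggle in some new
% device to supply the missing argument place; the notion of consequence
% by itself provides neither.
```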
But then why does Kosta add this comment?
This reduction of inference to implication, which squares well with the second dogma of semantics, is indeed the point of view of practically all of the philosophy of logic and language in the twentieth century.
(He explains that 'implication' serves for 'consequence' here, so it is the same reduction as above.) Kosta seems here to be saying that the vast mass of twentieth century researchers in philosophy of logic and language all make a mistake not far short of adding 2 to 4 and getting 11. Sad to say, he is right that there are one or two professionals in this field who lack this elementary competence; I could document this but I won't. But 'practically all . . .': that seems to me an unreasonable accusation to make with no evidence offered.
Kosta also refers to 'the second dogma of semantics'. As Kosta formulates it in his §3 (adjusting a similar statement in Peter's paper), this dogma states
The correctness of the hypothetical notions reduces to the preservation of the correctness of the categorical ones.
If I understand this right, the notion of z being inferable from y is 'hypothetical' because one gets to z by using y as a 'hypothesis'. The act of doing this is essentially the same as the act of making an inference from y to z, so we are hovering around the same semantic distinction as before. But I don't think I recall ever hearing anybody argue that the notion of making an inference can be defined in terms of something being a Tarskian consequence of something else. Rather the opposite: Tarski gave his definition at least partly so that a usable notion of consequence was available to people who weren't interested in the notion of making an inference. It's a big world, there are lots of different things to be interested in. Preferring to work on B rather than A is not a kind of dogma.
Kosta adds that the second dogma 'may be understood as a corollary' of a dogma that categorical notions have 'primacy' over hypothetical notions [4, §3]. In the mainstream semantic and model-theoretic literature that I've seen, nobody talks about 'prior' notions or about one notion having 'primacy' over another. So the burden is on those who use these terms to explain what they mean by them, and what evidence they have for attributing views that involve these terms to semanticists. Otherwise it is they who are the dogmatists.
Peter has asked whether people who use Tarski's truth definition regard satisfaction as prior to truth. It's a reasonable question, but I think that the answer is a straight No, except in a technical sense that is probably not much relevant to this paper. Tarski's truth definition goes by recursion on the complexity of formulas. It's a common mathematical experience that when we define or prove something by recursion, it can be nontrivial to formulate the notion that we carry up through the recursion. Often it will need to carry extra features that can be discarded at the end of the recursion. The notion of satisfaction was a technical requirement of just this sort, needed for the recursive definition. But if the question is about having informal concepts of truth and satisfaction, then my own view has always been that satisfaction has to be understood in terms of truth and not the other way round. I should add that this is a question I came to through trying to give an intuitive introduction to model theory for non-model-theorists. It's not a question that model theorists ever have to deal with in their normal business.