
Defining Meanings in General

We can separate out two strands in the aims of proof-theoretic semantics. One is to use proof theory to specify the meanings of logical constants. This can be generalised to specifying the meanings of other expressions too. (Peter tells me he would welcome faster progress in this direction, for example using more advanced proof-theoretic tools like those used to handle inductive definitions.) The other is to give a good description of logical consequence from the point of view of proof theory. I assume Peter's invitation was to comment on both of these aims. In this section I tackle meanings in general, and in the next section I turn to logical consequence.

Defining Meanings: Specialise Then Generalise

In the introduction to his Stanford Encyclopedia entry on 'Proof-Theoretic Semantics' [15] Peter says:

. . . the meaning of a term should be explained by reference to the way it is used in our language.

That's a very reasonable starting-point. I wasn't clear whether Peter takes 'our language' to be English (or German), or a formal language used in logic, but I'll assume the former. Paraphrasing Peter's statement a little, the meaning of an expression E in a language L is what you need to know in order to use E in L. But we should exclude purely grammatical information about E, so a safer statement is:

The meaning of an expression E in a language L is the further information that you need in order to use E in L, if you already know the grammatical facts about E.

There is more to be said on this, but not here.

Straight away we hit a problem. Life is open-ended, and so is language. The same expression can be used in indefinitely many different situations, and a priori there is no reason to think we can write down the rules for using the expression in a manageable description that covers all cases. This certainly applies to the logical constants 'and', 'every' and so on, which occur throughout the language and not just in contexts of logical argument.

So in practice we do what linguists have to do constantly in their studies. We narrow down to a set of contexts that we can handle, and we give rules for using the expression in those contexts. Then we rely on general facts about life and language to determine how the expression would be used in other contexts. I will call the narrow set of contexts the primary applications, and I will call the arguments used for generalising from the primary applications to the whole language the transfer arguments.

The 'Frege-Geach problem' illustrates these notions. In 1965 Peter Geach wrote a paper [7] in which—among other things—he attacked the view that you can explain the meaning of the sentence

He hit her. (4)

by saying that it ascribes a certain kind of action to 'him'. Geach argues that this explanation won't carry over to contexts where (4) is used but not asserted, for example when it follows the word 'If'. In contexts where (4) is not asserted, it doesn't ascribe anything. But, says Geach, the explanation needs to be carried over to these contexts, because we can apply modus ponens and argue

He hit her. If he hit her then q. Therefore q.

Moreover the two occurrences of the sentence, 'by itself and in the “if” clause, must have the same sense if the modus ponens is not to be vitiated by equivocation' [7, p. 462f].
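
To make the shape of the inference explicit (this is just the standard schematic rendering of modus ponens, with p standing for 'He hit her'; it is not notation from Geach's paper):

\[
\frac{p \qquad p \rightarrow q}{q} \quad (\text{modus ponens})
\]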

I used to think that Geach's argument was a very clever way of refuting all sorts of plausible theories. I still think it's clever, but now it seems to me to prove almost nothing. When we explain how an expression is used in certain contexts, transfer arguments will always be needed to infer how it is used in other contexts. In fact looking again at Geach's paper, I see that this agrees with his conclusion:

. . . it is up to [the person giving this kind of explanation] to give an account of the role of “p” that will allow of its standing as a premise. This task is pretty consistently shirked. [7, p. 463]

The key point that Geach contributes is that the validity of the modus ponens argument is a constraint on possible transfer arguments.

We must ask: Who has the responsibility for handling the transfer arguments?

To illustrate with 'and': a person who is explaining 'the way it is used in our language' will need to explain not only its use between propositions in deductions, but also such uses as

formally correct and materially adequate; black and white. (5)

There are subtleties here: a formally correct and materially adequate definition is a formally correct definition that is also materially adequate, but a black and white cat is not a black cat that is also white. How did we know this?
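
A rough way to display the contrast (a first-order paraphrase offered here for illustration, with made-up predicate names; it is not something argued for above) is:

\[
\text{formally correct and materially adequate}(x) \;\leftrightarrow\; \text{FormallyCorrect}(x) \wedge \text{MateriallyAdequate}(x)
\]

\[
\text{black and white}(x) \;\not\leftrightarrow\; \text{Black}(x) \wedge \text{White}(x)
\]

In the first case 'and' can be read as conjunction of the two predicates; in the second, that intersective reading gives the wrong result.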

You might argue that this property of 'black and white' is something for the linguists to worry about, and not a thing that proof theorists could be expected to have views on. But on the other hand linguists can't make bricks without straw: if the proof theorists expect the linguists to explain how the proof-theoretic meaning of 'and' transfers to uses like those in (5), then they must be prepared for the linguists to complain that the proof-theoretic meaning just isn't enough to generalise from. Somebody has to take responsibility for the join-up.

The point is very general. For example an explanation of the meaning of 'He hit her' in terms of truth conditions raises the question how we can infer what it means to say

Last Friday Zayd hit Amr very hard, to teach him a lesson.

Obviously, if you specify the meaning of 'hit' as the set of ordered pairs (a, b) such that a hit b, then you are going to have serious problems answering this question. (I stole this example from the great 11th-century semanticist ʿAbd al-Qāhir al-Jurjānī. Today people working on the semantics of tree-adjoining grammars wrestle with the same problem.)
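
For concreteness, the set-of-ordered-pairs specification just mentioned would amount to something like the following (a standard extensional rendering, written out here only for illustration):

\[
[\![\,\text{hit}\,]\!] \;=\; \{\, (a, b) \mid a \text{ hit } b \,\}
\]

A bare relation of this kind says nothing about how to handle the tense, the manner adverb 'very hard', or the purpose clause 'to teach him a lesson' in the longer sentence, which is why the transfer to such uses is problematic.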

 